Rick Rashid: Microsoft Research TechFest 2007

Transcript of remarks by Rick Rashid, Senior Vice President, Microsoft Research
TechFest 2007
Redmond, Washington
March 6, 2007

KEVIN SCHOFIELD: Well, good morning. My name is Kevin Schofield. I’m a general manager here at Microsoft Research. I would like to welcome you all here this morning. We have a little over 300 of you here in person. We’re also Webcasting this live out over the Internet. So welcome to all of you who are watching this live out there as well.

This, of course, is Microsoft Research TechFest. It’s our annual open house for Microsoft Research, the event that we put on once a year that we’re super, super excited about, where we bring our researchers from all five of our labs around the world here to Redmond and share the latest and greatest research that we’re doing with the rest of the company. That event is really happening tomorrow and Thursday here on campus, and we expect to have at least 6,000 people from the rest of the company come through. Normally, this is a very closed event, but this year, in sort of celebration of the 15th anniversary of Microsoft Research, we decided to throw open the doors and see how much of this event we could actually share with all of you. And so we’re very thrilled that you can be here to join us for this today, and we’ve got almost 50 demos in the room across the hall here.

Also this morning, we’ve got a couple of talks that we’ve prepared for you. We’ve got Rick Rashid, Senior Vice President and head of Microsoft Research, who’s going to tell you a little bit about Microsoft Research overall, Microsoft’s approach to innovation and the technology cycle, and share with you a few particular demos that fit with that overall theme. After Rick, we’re going to bring out Dr. Rico Malvar, who is the head of our Redmond lab, and who is going to tell you a little bit about the work that we’re doing on search, and interaction, and collaboration. So it’s going to be a full morning.

After Rick and Rico come out, I’ll come back out and tell you a little bit about the logistics for the rest of the day, and how we’re going to give you all time to see the great demos, and give you some other opportunities to ask some questions of our lab directors, and hear a little bit more about what’s going on in the organization altogether.

So, overall, thank you for coming here. I know many of you traveled a long distance to be here. We appreciate your time, and thank you for being here with us.

At this point, it is my pleasure to introduce my boss, Rick Rashid, Senior Vice President, head of Microsoft Research. He’s actually been head of Microsoft Research continuously since he founded the organization 15 years ago, in September of 1991. Before coming to Microsoft in 1991, he was a professor of computer science at Carnegie Mellon University in Pittsburgh. He has a Ph.D. in computer science from the University of Rochester. He speaks Italian. He is an avid fan of Star Trek, for which he makes no apologies, and I think you’re going to be fascinated by the stuff he’s going to show you this morning. So please join me in welcoming Dr. Rick Rashid. (Applause.)

RICK RASHID: Thanks, Kevin. I didn’t know he was going to throw in the Star Trek thing there. If you actually go to my office, you’ll see there’s a Star Trek logo on my office door. So I’ve had this tradition over the years that as long as I’ve been in the field, I’ve taken everyone who has worked for me out to each new Star Trek movie when it came out. So the first one was in 1980; I had two graduate students at Carnegie Mellon, so that was a really inexpensive treat for them, and for me, which was a good thing because I think I only made $21,000 that year. Now when I do it, and I think the next Star Trek movie is supposed to come out at the end of next year, it’s 500 people and their families, so I rent out a multiplex, all on my dime, and it’s a lot of fun. I usually dress up in a Star Trek outfit, too.

But I’m not doing that today. What I’m going to do today is talk to you about Microsoft Research, and what we’re doing, and give you some context for the TechFest event today, and the things that you’re going to be seeing. Now, some of you, I know, have been here before, and you’ve probably seen a slide that looks about like this; for other people, this is the first time. The key thing here is, this slide always changes. We keep growing. We’re now about 750 researchers. By the end of this fiscal year, which for us is the end of June, we’ll be close to 800 researchers working in five different research labs in six different locations around the world.

And to put that in perspective, that’s about like creating a Berkeley Computer Science Department faculty every year for the 16 years Microsoft Research has been in existence. So it’s been an enormous undertaking for us to build a research organization of this size, and you can see we have labs in Redmond; Silicon Valley; San Francisco; Cambridge; Beijing; and the most recent research lab is in Bangalore, India. So that gives you a sense of what’s going on. Now, by far the largest group of our researchers is here in Redmond, but our second largest group is actually in Beijing. The next one after that is in Cambridge. So we’re a very international organization, with roughly half of the research organization outside the United States, and about half of it inside the United States.

This gives you a sense of where the research labs are, and the activities that we have, but I wanted to highlight the fact that it’s not just about us; we also work with many universities and organizations around the world. So I’ve got up on that chart a number of the institutes and the various organizations we work with. Earlier this year, for example, I was in Paris inaugurating a joint research laboratory that we have in the Paris area with INRIA, the French national computer science research institute. Last year, I was in Italy opening a computational biology institute in Trento. We have a joint academic collaboration center in Japan. We have many joint laboratories with universities in China, many joint technology labs in Latin America, and institutes that we’re creating in Latin America and in North America. So we’re very distributed, and we’re growing, and we’re trying to work closely with the rest of the world.

We have one of the most prestigious groups of researchers in the world, and I think one of the things you should enjoy over the next day is really having a chance to talk to many of these incredible people: Turing Award winners and Draper Prize winners, which are roughly the Nobel Prizes of computer science and engineering; Academy Award winners for their work in creating computer graphics; leaders of the technology field. And if you look at the number of Microsoft people in the National Academies, we rival a major university in terms of the people we have as members.

But I’m also extraordinarily proud, and you’ll be seeing many of these people today as well, of the young people that are growing up in Microsoft Research, the new generation of leaders that we’re creating. We’ve been running now for almost 16 years, and we’re really building a base of incredible research talent, and these people are taking leadership positions in their fields, winning awards, and really establishing themselves in their fields with their achievements.

Now, if you’ve ever heard me talk about Microsoft Research, you will have heard me say these two things. We have had exactly the same mission statement for the entire time I’ve been here. In fact, in some senses, I’ve been really lucky that I’ve been able to create this research organization, and to be able to give it the same guidance for 15-plus, almost 16 years now. I mean, that’s an incredible record of stability. And, in fact, my joke among my peers at Microsoft is that, I’m the Microsoft executive who has had the same job the longest, with the exception maybe of Bill as Chairman, although he’s been Chief Software Architect on and off, and various other things. But I’ve had the real privilege of being able to run this research organization from its inception, and to maintain the same philosophy, and the same approach.

First and foremost, our job is to move the state of the art forward, and that’s what we try to do. And the reason that that’s our most important mission is because unless we’re really taking a leadership position in the fields that we work in, then we’re just not that valuable to Microsoft, frankly, we wouldn’t be really valuable to anyone. Our goal is to change the world, and to change the technologies that make the world a better place.

Now, when we have great ideas, I mean, the first statement says nothing about Microsoft, but when we have great ideas, then we work extraordinarily hard to get those ideas into our products. And events like the TechFest event today that you’ll be seeing, and then the employee event that we’ll have for the next couple of days are really intended to be part of that mission of how do we get our ideas into our products, how do we get them out into practice as quickly as possible. There are many things that we do, but this type of an event, where we bring in so many of our employees to see what we’re doing, is a great way of getting ideas transferred and starting dialogues about how things can change.

Now, I wanted to expand on the point of pushing forward the state of the art, because people often say, well, what does that mean? For us, what it means is, we’re taking a leadership position in the technology areas that we work in. We publish our work in the open literature. We subject ourselves to the peer review process. My feeling is, if you’re not doing that, you’re not doing research, you’re doing development. What we’re doing is, we’re out there showing our peers in the academic community what we’re doing, we’re allowing ourselves to be criticized, and the work that we’re doing to be examined by outsiders, and that’s an important part of the way we work, and the way we operate. And if you look at major conferences in computer science today, what you’ll find is that a significant fraction of the papers being published are coming from Microsoft. If you go to conferences like SIGGRAPH or SIGIR, the major conferences in graphics and information retrieval, you’ll see more papers from Microsoft Research than from any other single organization. In the case of, say, SIGGRAPH, over the last 10 years we’ve published more than two or three times as many papers as any other organization. So we’re really having an impact; we’re helping to further the discourse in these academic fields.

But we don’t just write papers. We participate in the research communities. We work with DARPA, and the National Science Foundation, the National Research Council, the National Academy of Engineering, and the National Academy of Science in the United States. We work with equivalent organizations in many countries around the world. We’re part of the dialogue about how research is done on a worldwide basis, and how the academic community can push forward the state of the art more broadly. So that’s an important part of what we do. We have strong ties to the university environment, and many projects that we do are joint with universities. In fact, you’re likely to see projects today as you walk around and talk with some of the researchers that came out of collaborations with universities, or that were part of efforts that we did with interns coming from the university environment.

Microsoft Research runs the largest Ph.D. internship program in the technology field. Each year we have more than 800 Ph.D. interns working in our labs worldwide; just in the United States, just this last summer, we had more than 300 Ph.D. interns working at Microsoft Research. And to put that in perspective, the United States only produces about 1,200 Ph.D.s a year in the field of computer science. So a large fraction of the students that come out these days with Ph.D.s in the United States will have worked at Microsoft Research at some time during their careers. One thing that means is that when we talk about having 800 researchers, which is where we’ll be by the end of this fiscal year, that’s really not the whole story. During the summers here, we’ll often have as many as 1,500 people doing basic research, counting our visiting faculty, our interns, and our visitors from other research organizations. So it really gives you a sense of the level of effort that we’re putting in.

A lot of times people ask me, and some of you may ask me later today, why does Microsoft invest in basic research? Why do we do the things that we do? And you can come up with a lot of answers for that. You can say, well, you’re doing it to create specific technologies, you want to have new products, or you want to solve specific problems. That’s certainly part of it. But I think the most compelling reason that Microsoft invests in basic research, or, for that matter, that our society, our nation, our world invests in basic research, is to allow us to be more agile. It’s to make sure that if the world changes quickly, for whatever reason, we’ll be able to adapt, because we’ll have the technology, we’ll have the ideas, and we’ll have the people to be able to do that. So if we have new competitors, if we have new technologies that come up, if we have new business conditions that suddenly dictate that the company needs to change rapidly, our research organization allows us to do that. The chances are good we already have an IP portfolio in whatever the new area is, because we have such a broad research group; the chances are good we have researchers that understand the state of the art in the field, and can advise the company as to the direction to take; and the chances are good that we have the human capital within the company to be able to quickly staff and move into a new area. And the same thing can be said of the nation: as we invest as a nation in research, and especially in long-term basic research, it’s to make sure that we as a nation, or we as a broad world society, can quickly react to change.

Now one of the things that I think is exciting about this TechFest event, and frankly one of the reasons why I really enjoy it is to see the breadth of research, and the breadth of activities that are going on. We’re not just focused in a few narrow areas of computer science. Microsoft Research actually has a much broader agenda than you’ll find at most universities in terms of the broad field of computer science, and how that field of computer science impinges on so many other aspects of science and of our society. So you’ll see hardware devices, research in mobile computing, technologies that could support new businesses in emerging markets, search technologies, interaction technologies, systems and networking, fundamental, theoretical research, which is an important underpinning of what we’re doing.

Let me just give you some examples, and you’ll see some of these as you go on. For the last six or seven years now, we’ve been making incredible progress in being able to understand the fundamental structure of software programs, and in being able to prove properties of software at very large scale, hundreds of thousands or millions of lines. You’re already seeing that in what we’re doing with Windows Vista. In the Windows Vista driver development kit, for example, there’s something called the Static Driver Verifier. It’s a proof tool that our OEMs and independent hardware vendors can use when they’re creating device drivers, to be able to prove that their device drivers are using our APIs in the appropriate way. So it’s an example of a widespread proof tool that has come out of research, and you’ll see more of this kind of work through the sessions today.
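To give a flavor of the kind of rule such a tool checks, here is a minimal sketch, in Python rather than the static analysis over C drivers that Static Driver Verifier actually performs: a lock-discipline rule checked over a trace of API calls. This illustrates the concept only; it is not SDV’s implementation.

```python
def check_lock_discipline(trace):
    """Return None if the call trace obeys the acquire/release rule, else a bug description."""
    held = False
    for i, call in enumerate(trace):
        if call == "acquire":
            if held:
                return f"double acquire at step {i}"
            held = True
        elif call == "release":
            if not held:
                return f"release without acquire at step {i}"
            held = False
    return "lock still held at exit" if held else None

print(check_lock_discipline(["acquire", "release"]))             # None: rule satisfied
print(check_lock_discipline(["acquire", "acquire", "release"]))  # double acquire at step 1
```

The real tool proves such properties over all paths through the driver’s source code rather than checking one observed trace, which is what makes it a proof tool rather than a tester.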

In many cases, our research is inspired by changes in technology. The dramatic increase in our ability to store information is fundamentally changing our relationship with information, and changing the kinds of applications that we can imagine building. This is really what I call the age of human-scale storage. And you’ll see examples of that throughout the day as you see various projects and various kinds of demonstrations.

To put it in perspective, for those of you who haven’t gotten this insight, you can now go out and buy a terabyte of disk space for under $500. With that terabyte, you can store every conversation you personally have from the time you’re born to the time you die. We’re now able to keep track of an image taken every minute of your life, and keep it on that terabyte of disk. It dramatically changes the way you think about your life, and the way you think about storage, and the way you think about running your small business, or your medium-scale business, or your large-scale business, because now we can keep track of and record and store transactions in ways we never could before.
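A quick back-of-the-envelope check of those claims, under assumptions of my own choosing (an 80-year life, speech compressed to roughly 1 KB/s for two hours of talking a day, and one 30 KB image per waking minute):

```python
SECONDS_PER_HOUR = 3600
DAYS = 80 * 365                                        # ~80-year life

speech_bytes = DAYS * 2 * SECONDS_PER_HOUR * 1_000     # 2 h/day of speech at 1 KB/s
image_bytes = DAYS * 16 * 60 * 30_000                  # one 30 KB image per waking minute

print(f"lifetime speech: {speech_bytes / 1e12:.2f} TB")  # ~0.21 TB
print(f"lifetime images: {image_bytes / 1e12:.2f} TB")   # ~0.84 TB
```

Both figures land comfortably under a single terabyte, which is the point of the human-scale storage observation.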

An example that came out of this insight is work that we did in our lab in Cambridge, England, called the SenseCam. You’ll see many examples of this sort of sensor-based technology in the demo sessions today. What’s interesting about SenseCam is, it’s really the notion of saying, we can now imagine putting a black box on a human being, something that can record images, and audio, and motion, and location, and heat, and infrared, a wide variety of different kinds of sensors. Now, when we first started this work a few years ago at Cambridge, there wasn’t really an application. In some sense, it was saying, well, we can do this, so let’s just do it, and let’s see what happens. What’s been interesting is, we’ve gotten a tremendous amount of excitement. Many universities around the world are now using these devices for experimentation. We’ve been working with DARPA, which is interested in using these devices in some of their experimentation. We’ve been working with some police departments that are interested in these kinds of devices that would let them instrument their officers. But interestingly enough, we’ve also hit upon applications that you wouldn’t have imagined when we first began. We’re doing clinical trials in England with doctors looking at memory-loss patients.

And here’s an example of a particular case study that’s been done with a woman who is unable to remember events after a few days because of the limbic encephalitis she suffers from. And what’s exciting about this is, by allowing this woman to record an event using this device on her person, and then giving her the opportunity to review that event through the images stored in the device, and this is done in a particular way, I won’t go into the details of it right now, she is actually able to remember the event over a long period of time, when no other technique allows her to do that. So this is extremely exciting. A lot more work has to be done to replicate this, but it gives you a sense of how broad the research is that we do, how far afield we’re willing to work in order to see where technology can take us, and how the implications of research in computer science can really change many fields, not just the computer science field, but medicine, biology, physics, and so forth.

Now, another area that I’ll bring up, related to storage, is this notion that we can now not just store what happens to a person, or what may happen to a small business, but we can think about storing what’s going on on your planet. Going back to 1998, we began an effort called the TerraServer to put online a greater-than-a-terabyte database of images of the Earth’s surface. You can think of this as sort of the granddaddy of Virtual Earth, or Google Earth, or Yahoo Maps, or whatever you want to call it that people are doing today. It was the first attempt to really get this kind of technology out there.

I should point out, in that picture you see the Space Needle; this was one of our early images from the very early days. Of course, now that technology has been transferred to our product groups. They’re using it as a basis for Local Live, and you can see the kinds of images that you can find today in these sorts of side-angle views. And more and more technology is being applied to this kind of space.

The whole effort with the TerraServer led us to think about what else we could be doing. And we went on to look more broadly: not just at images of the Earth’s surface, but could you work with the scientific community, in this particular case astronomers, and think about building something like the TerraServer, but for space? And that idea produced something called the SkyServer. It was work originally done out of our lab in San Francisco by Jim Gray, and it’s really helped to change the way we think about working with the scientific community.

Now, I would like to introduce Curtis Wong, who is going to step us through some additional work that’s been done in this space, as he talks about going from the SkyServer to a real WorldWide Telescope. Curtis.

CURTIS WONG: Good seeing you. Thank you.

About six years ago, Jim Gray met with Alex Szalay at Johns Hopkins University, and they started thinking that the Internet really had the opportunity to be the best telescope. If you look at the trends that were happening in terms of growth in telescope mirror glass, it was doubling every 25 years. But when you looked at what was happening in terms of the data coming from CCDs, that was doubling pretty much every two years. So astronomy was changing from an observational science to more of a computational science. And the opportunity really was there to think about how we can take this data and make it accessible to everybody. So that was Jim’s original vision for the WorldWide Telescope.

One of the things we’re trying to do now with this, if you can imagine the difference between the TerraServer showing you images and what we can do with Virtual Earth right now, is to think about how that same kind of information can be brought together and federated from multiple surveys. Starting from the first one that we did, the Sloan Digital Sky Survey’s SkyServer a few years ago, we’re bringing that together along with images from Hubble, and potentially other surveys as well.

What’s also really interesting is to think about how such material can be used for educational purposes. When you look at what happened with the Sloan SkyServer site, the traffic on that site pretty much doubled every year. So there were over a million visitors, with 47,000 hours of educational content delivered. And one thing that is really interesting about the SkyServer site is that it allowed teachers, for the first time, to create astronomical exercises that they could assign to students, and have them go and use real data. I mean, there aren’t many scientific disciplines out there where you’re on the same footing as professionals, if you will. And so one of the exciting things that we’re trying to do is to extend it beyond just data, and bring in rich media. We want to take it beyond seeing beautiful images, and really have rich annotation so that you can understand what’s there.

Part of that is to build this linkage between rich media and our Virtual Sky. And the Virtual Sky, this lower box you’re seeing here, in terms of spatial exploration, is the first part that I’m going to show you today that we’ve been building. So I figured we’d start here over the conference center, and come back out, and of course you’ve all seen this before. Here is our Earth. But what we’re going to do is, we’re going to go out into the sky. So we’ll flip around here. And what you’re looking at is probably the most recognizable constellation in the sky, the Big Dipper; everybody knows what that looks like. But did you know that if you look down, and you zoom in a little bit, you can see that, and this is one thing you realize when you start doing this, everywhere you look there are more and more galaxies. This particular galaxy is called M106. And it’s a particularly unusual galaxy, because at the center, and we’ll zoom in here to the center of it, is a supermassive black hole, a black hole that has more than 36 million solar masses in a fraction of a light year. And you can see in visible light there’s this noticeable sphere in the center, and if you look at this in infrared, or X-ray, you’ll see massive amounts of ejecta coming out of it.

Let me show you another example, just in the neighborhood of the Big Dipper, if you will. If you look below the handle of the Big Dipper, and last night we integrated the Hubble images with it, this is a galaxy called M51, also known as the Whirlpool Galaxy, and you can go in as deep as you want. Imagine how great this would be if you had astronomers from Harvard, and from Space Telescope, able to create rich media tracks through this that you could share with other people, or, for your own exploration, being able to go through and add your own content, and create paths and share those with other people.

Up above the Big Dipper, there’s another object. So we’re going into a section of the Sloan survey; the Sloan has not mapped the entire sky, so we’re looking at part of it. There are two galaxies here. This one is called M81, and the one right here is called M82, and they’re both about 25 million light years away. And they’re actually very close to the same plane, so there’s quite a bit of gravitational interaction going on between these two galaxies. In fact, if you go into M82, you can actually see some of the effects of that gravitational interaction, in terms of the gravitational compression of hydrogen into these massive red hydrogen clouds that are coming out, as well as large supernovae, which are creating some of the dust clouds that you’re seeing here now. So that’s it for this demo.

We’re going to be at TechFest in Booth 112, and we’re going to have this same demo running on a large nine-LCD-monitor display. So I invite you all to come and see the great work that Jonathan Fay and Jina Suh have been doing on this in our group. And it’s really exciting, because our vision for this thing is that a kid who doesn’t have a telescope, which was me about 50 years, 40 years ago, could go online and get engaged, listen to a lecture maybe from Harvard about astronomy, and go as deep as they wanted, perhaps to a lecture about galaxy collisions, and then see something from an episode of NOVA that connects some of this stuff together, and be able to then browse the sky, and see other objects that are related to it, and then come back up into a different narrative that helps them understand what the sky is. So that’s our vision, and it’s been Jim Gray’s, and Alex’s, too, and I invite you all to take a look. Thank you. (Applause.)

RICK RASHID: Great. Thanks, Curtis.

I love the idea of being able to sit in my home and be able to just explore space, since I’m unlikely to be able to get there any other way.

Let me just tell you about a few other things that you’re likely to see as you go around today. There are technologies we’re looking at, for example, for the home. You can think about having sort of a virtual telescope in your home; we’re also looking at ways that you can have a virtual presence, a remote presence, in a home, so that people can share where they are and what they’re doing, even when they’re remote, and the people within a home can see them. And you’ll get a chance to see that, along with many other kinds of home technologies, like the Bubble Board, which is really a new kind of telephone answering system that lets you see not only that there are messages waiting for you, but visually who they’re from, how they’re connected in time, and how they interact with each other. This is work coming out of our research lab in Cambridge, and again I encourage you to walk around and see some of these things.

There are also some things which I think are very interesting in that they’re both trying to solve problems and looking at what kind of positive impact you can have. Now, I’m sure that you’ve probably heard the joke that on the Internet nobody knows that you’re a dog. I didn’t want to include the actual cartoon, because I assume it’s copyrighted, but I’m sure you all know what that cartoon looks like, a little dog typing at a terminal. Well, one of the problems that people do have on the Internet is trying to figure out, are you a person or not. A huge amount of the traffic these days to search engines is bot traffic: people trying to effectively manipulate search engines in some fashion, or change the way they behave, or people trying to interact with various kinds of software systems that exist on the Internet. So there’s value in being able to determine whether it’s a real person, or some computer that’s trying to do that.

There have been a number of great approaches generated, and many of you have probably seen these on the Web today. At Carnegie Mellon, they came up with something called a CAPTCHA. There’s really this notion that you can create something that looks like text to a person, but doesn’t look like text to a computer, or at least ideally doesn’t, and you can ask the person to find the letters in the warped, messed-up text, and therefore hopefully determine that you’re a person rather than a very clever computer program. The problem with these approaches, though, is that the software is getting good enough that it becomes increasingly difficult to produce these kinds of images that a person can handle but a computer can’t. And that’s especially true for people who may have various kinds of disabilities. And so that’s a potential concern.
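As a rough illustration of the mechanism, here is a toy warped-text generator, assuming a recent version of the Pillow imaging library; real CAPTCHA generators use far more aggressive distortion, clutter, and noise than this sketch does.

```python
import random
from PIL import Image, ImageDraw

def make_captcha(word, size=(240, 80)):
    """Render a word with jittered letters, then warp the whole image."""
    img = Image.new("L", size, 255)            # white grayscale canvas
    draw = ImageDraw.Draw(img)
    x = 20
    for ch in word:
        # Jitter each letter vertically so segmentation is harder.
        draw.text((x, 30 + random.randint(-12, 12)), ch, fill=0)
        x += 25
    # A random shear warps the whole image (Pillow >= 9.1 enum shown here).
    shear = random.uniform(-0.3, 0.3)
    return img.transform(size, Image.Transform.AFFINE,
                         (1, shear, 0, 0, 1, 0), fillcolor=255)

make_captcha("TECHFEST").save("captcha.png")
```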

Some of our researchers hit upon this idea of saying, well, what’s a really hard problem that people are good at, but that we don’t really yet know how to do with computers? And I’ll point out that some of these researchers were also great pet fans, and they said, you know, distinguishing cats and dogs isn’t as easy as you might think. Here you see pairs of images of dogs and cats that actually look a fair amount like each other, and the idea is that you ask a person to say which is a dog and which is a cat. So these things are cats, and those other ones are dogs. What makes one look like a dog and another look like a cat? Well, that’s still pretty hard for a computer right now. So it becomes an effective way of querying a person and verifying that.
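The verification logic itself can be sketched very simply, assuming a large pool of photos whose labels the server already trusts; all names and structure below are hypothetical, not the project’s actual code.

```python
import random

# Hypothetical labeled pool; in the real deployment the labels come from a
# pet-adoption database the server trusts, not from random assignment.
PHOTO_POOL = [(f"img{i:03d}.jpg", random.choice(["cat", "dog"])) for i in range(100)]

def new_challenge(pool, n=12):
    """Pick n photos; the server remembers which ones are cats."""
    photos = random.sample(pool, n)
    answer = {name for name, label in photos if label == "cat"}
    return [name for name, _ in photos], answer

def verify(user_selection, answer):
    # Randomly guessing all 12 photos right succeeds about 1 in 4,096
    # times (2^12); a human who can tell cats from dogs passes easily.
    return set(user_selection) == answer

photos, answer = new_challenge(PHOTO_POOL)
print(verify(answer, answer))  # True: a correct selection passes
```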

There’s a Web site that’s being brought online, and we’re actually doing this in collaboration with Petfinder.com and with the Humane Society. So not only will people be figuring out whether they’re real or not by clicking on images of cats and/or dogs, but they will actually be looking at real animals that you can adopt. In fact, for the very first time at TechFest, to my knowledge, we have non-humans in the TechFest room this year. They’re cats that are part of this particular demo exhibit, and they’re available for adoption. So I’ll let you think about that as you walk through during the day.

Now, there are other issues that cause us to think about new ways of doing things. One of the concerns that people have right now is that we’re really at a kind of a low point in terms of the interest among young people in computer science, and really, in some senses, in engineering in general, but computer science in particular has been very hard hit. Here is a graph that comes from what’s called the Taulbee Survey of the Computing Research Association, which shows the number of newly declared undergraduate majors. You can see there’s been a tremendous decline since roughly 2000 in the number of undergraduate majors.

I’ll tell you from personal experience, there are a lot of jobs out there, and those jobs are going to go begging over the next few years, simply because we won’t have the young people coming out of the system to take them. Here is another graph that shows the level of interest, from a UCLA study that’s been done since the 1970s, and we’re at an extraordinary low point that hasn’t been seen since the 1970s in terms of interest in computer science and computer engineering.

Well, some of our researchers took that as a challenge and said, well, how can we get students, especially young people, really young people, more interested in computing, more interested in learning about computer science? So I’m going to pass the baton now to Matthew MacLaurin from our creative systems group, who has come up with a way of teaching young people how to program.

Hey, Matt.

MATTHEW MACLAURIN: Good morning.

RICK RASHID: Good morning. There you go, there’s your clicker.

MATTHEW MACLAURIN: So I started programming about 26 years ago. I was lucky enough to grow up outside of Silicon Valley, in a small town in the Santa Cruz mountains, and back then programming was a really exciting thing to get into. Basically, in 1980 I got a Commodore PET 2001. A friend of our family was a systems analyst for Bank of America, and it was the first time I ever stayed up past 2:00 in the morning, programming my own little games on that machine.

Since then we’ve really seen the industry mature. For a lot of people it’s become a very profitable career, and in a certain sense the financial success of the computer science industry has kind of overshadowed the fundamental creative excitement of programming. So I have a daughter now, she’s 2-1/2 years old, and many of you who are parents, I’m sure, have had the experience of explaining something that is important to you to your child for the first time. And I found that that’s a really clarifying experience. You really stop to think carefully: what is this really all about, what are the fundamental essentials, what makes this important? And so we really set out in this project to answer that about programming. There are several good reasons to do it.

We really want to -- a lot of people, when they think for the first time, programming sounds fun, I’d like to check that out, what they’re faced with is suddenly a large screen of very complex, arcane, esoteric text. And it’s usually not what they had in mind, particularly today, when a lot of kids get to play really spectacularly visual videogames; it’s very daunting when they’re then presented with something that looks absolutely nothing like a videogame, and is very far removed from the experience that they want to have.

So we want to get to kids early, and show them what programming is about, minus some of the intimidating factors. And we think this is something that can really help kids, regardless of the career they go into. And it’s also noteworthy that some of the fundamental breakthroughs in computer science have actually come from efforts that were targeted at children; object-oriented programming is largely credited to the Smalltalk work that Alan Kay did, again, to teach programming to kids a long time ago.

So what we’re doing with Boku is an incredibly simplified visual programming environment. Visual programming environments have been done before, and some of them, although you’d think visual means it’s going to be really easy to approach, can get complicated really fast. So we took some very specific strategies to keep this simple, which I’ll show you in the demo.

There’s no typing; there’s no keyboard involved in this at all. As a matter of fact, you can run the entire programming environment on an Xbox or on a PC. And it’s really impossible to get a syntax error. The program will only let you construct valid code, and the only controller you need is the Xbox game controller. We have this little paradigm where there are these little racks, and you have little tiles that you put on them, and that’s the process of creating software.

Now, combining gaming and programming together is something you really have to approach carefully, because we really feel like there’s a strong, deep research agenda here. And it’s okay to do that and be fun and beautiful at the same time, but you don’t want to distract yourself strictly with the visuals. So the basic setup here is, we’ve got this little fellow Boku; he’s a little robot who lives on a tropical island. And he needs programming to succeed. He’s got these challenges, and he needs you to give him a little program to help him meet these challenges. And it’s very important to us that the experience, the user interface, be very fluid, so you can move back and forth between gaming and coding, and sort of see your experiment in progress, and manipulate the code as it’s running.

So with that I’ll go over to the demo now. Let me just fire it up. Okay. So this is our basic startup screen. I’m going to show you a little teeny world, and I’m going to go ahead and write some code right on stage here. So here’s Boku. Let me freeze the scene. So he’s in a little world, and I’m going to go ahead and take him out of the scene, and we’re going to start from scratch and make a new little program for Boku.

So notice I’m just using the Xbox controller, where everything feels kind of smooth and gamey. So in the world, I’m in edit mode now, and you can tell by the big column of light. I can add objects, so I’ll add a little apple, add maybe a green apple, and another red one, and then let’s put a Boku in the scene.

So now I’m going to run the program, and he’s not going to do anything. He’s looking around sort of letting you know that he’s running, but I haven’t given him a program yet. So let’s give him the perhaps fairly obvious program, we want him to eat an apple. So I’m going to open up his little brain, and this is what a blank page looks like in this programming environment. And we want this to be very intuitive, because we know most people when they pop a game in don’t reach for the manual first, they reach for the controller first, and they want to get right into it. So we really want to encourage people to play around, see what happens, and sort of stumble across how the thing works. I know how it works, so I’m just going to show you.

So in this first column here we’re going to ask Boku to look for -- to use his senses. I’m going to tell him to look for something, and then I’m going to tell him to look for something red. Then I’m going to tell him to do something when he sees something red. So in this case I’m going to tell him to move, and let’s have him move towards the thing that he just saw. So I’m going to run the code now, and we see him, and he does, indeed, move towards the red apple. So I have now just successfully written and executed my first program, and there was no giant screen of text. There was no crash.

Now let’s go a little more advanced, and I do mean just a little more advanced. So if we go back into edit mode, now I’m going to tell Boku what to do when he gets to that apple. So I’m going to add another row. I’m going to say, this time, when you bump into a fruit, go ahead and eat it. So he goes up, chomps on that apple, and that’s debug output. If any of you have written code, you’re familiar with the printf statement, which is sort of how your program tells you its internal state. Boku just uses a little speech balloon. So these are real computer science concepts, but they’re just presented in a way that’s fun and engaging.
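For readers who do want the giant screen of text: the paradigm being demonstrated is a list of sensor-condition and action pairs evaluated every tick. Here is a minimal sketch in ordinary Python, illustrative only; this is not Boku’s actual runtime, and all names are invented.

```python
# Each rule pairs a sensor condition with an action; on every tick, the
# first matching rule fires. The "bump" rule comes first so eating wins
# over moving, mirroring the demo's priority.
rules = [
    (lambda world: world.get("bumped") == "fruit",
     lambda bot: bot.update(action="eat")),
    (lambda world: world.get("sees") == "red apple",
     lambda bot: bot.update(action="move toward apple")),
]

def tick(world, bot):
    for condition, action in rules:
        if condition(world):
            action(bot)
            break

bot = {}
tick({"sees": "red apple"}, bot)
print(bot)  # {'action': 'move toward apple'}
tick({"bumped": "fruit", "sees": "red apple"}, bot)
print(bot)  # {'action': 'eat'}
```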

So that’s probably enough on this level. Let me show you some of the other stuff that we and other people have done with this. So here’s a little thing, a tiny little soccer game. So here we see the Bokus, and now they’ve learned how to kick. So they can cruise around, and they’re each kicking the apples in different directions, and they’re trying to score by going between the little palm trees there. And again, I can stop at any time, and I can go in and look at this code. And this is something we want -- we wanted kids to be exchanging their programs. If any of you have played Xbox Live, there’s a strong social and sharing component. So we want kids to be able to send these programs back and forth, and maybe every now and then a new little programming tile will be released, and you’ll be able to add it; maybe you’ve learned to fly, or tunnel, or dig, or something like that.

Let’s look at another level real quick. So this is two Bokus having a little contest; each of them is trying to turn the little lanterns into the color they prefer. So one of them is turning them red, one of them is turning them blue. Again, this is simple little code here. So we’ll open up this Boku to see what his programming is. He says, if I see a blue one, then I’m going to move towards it. And then this down here says that I’m going to glow red. This over here says, if I don’t know what to do, if I haven’t seen any of the triggers, I’m just going to wander around.

Now we’re going to give this guy a little teeny advantage. We’re going to say, when he sees one that’s blue, that’s been set up by his rival, he’s going to move there more quickly. So now when we run that, he’ll move at a normal pace towards the unlit ones, but he’ll speed up when he sees one that was turned blue by his rival. And it was funny, when I was first setting up this level, preparing for the demo, what he started doing was overshooting, because I’d made him faster, thinking that would make him win; but then sometimes he overshoots the lamp, and then he can’t see one in front of him, and he sort of gets confused, and slows down. So where I intended for this change to the code to help him win faster, it actually didn’t give him a tremendously strong competitive advantage.

This is exactly the kind of thought process we want to get kids into. You set up the program, the program is doing exactly what you told it to, but it’s not doing what you meant. And that’s programming, and the fact that we’re able to get people into that kind of thought process with a user interface that looks like this is something that’s very exciting for us. And my daughter will sit there and explain it to me, she’ll say, Boku only eat red apple.

And so, just as an example, a last little peek at a world that’s a bit more complex, because we know kids are playing some very, very fancy, big-budget videogames, and we know a lot of them will want to emulate those kinds of scenarios. So here’s a complex world that really only took a couple of hours, but you see there are these flying saucers, they’re flocking around, and they’re looking for Bokus somewhere over here in the hills. There’s one Boku, and he’s going around kicking green apples and eating red ones. And then this fellow here is following this Boku, and whenever he sees a saucer, he’s going to shoot a little toy missile at it. So he’s trying to protect the apple-gathering expedition.

So that’s about as much as I’m going to show right now, if we can go back to the PowerPoint, please. So next I’d like to introduce Lili Cheng, who is my boss, and I think one of the best software designers in the business, and who was recently the director of design for Windows Vista. She’s going to show you Mix, which is a project that brings the Web to your desktop.

LILI CHENG: So as Matt said, we just did a little stint with the Windows team and worked on desktop search and Windows Vista. I think one of the things that we’ve really realized coming back is that search is really here to stay, and it’s incredibly powerful, in some sense because it gives you such simple access to a really overwhelming amount of information, information that increasingly is being stored in all different places: local information, information in your enterprise, and also on the Web. But search today is pretty much a solitary experience. And everything that I do on my computer, literally everything, I share with other people, either collaborating with other people or viewing things that other people have created for me to see. So what Mix really looks at is letting you author and share the searches, and the results that you collect over time.

So what I’m going to show you today are a few little mock-ups for the project, and then I’ll show you the actual project. So first, what we see people doing is things that they always do. They’re going to go and search for a collection of things. So this is just a collection of things that we were looking at as we were doing the Boku project, on graphics, and what kind of graphics engines we should use.

Then, what I want to do with that collection, I don’t really want to collect it as a bunch of -- share it as a bunch of links; I really want to author it and make it something that represents the collection of information that I’m searching for, and add annotations, and maybe adjust the view.

Then I really want to publish and share it. And this seems very simple, but it’s actually kind of tricky, because a lot of the information that I might be searching is just on my local machine. So what we’re doing behind the scenes is taking a mix, which might be referencing local content, and copying it up to a place where it’s mutually accessible to everybody that you’re sharing with, and then updating that information on an ongoing basis. And then hopefully all your recipients can actually have live access to the data that you’re sharing.
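A rough sketch of that behind-the-scenes publishing step, with hypothetical paths and structure; the actual Mix pipeline is not described in this talk beyond the idea of copying locally referenced content to a mutually accessible place and keeping it fresh.

```python
import hashlib
import shutil
from pathlib import Path

def _digest(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def publish_mix(local_items, share_dir):
    """Copy locally referenced mix items to a shared folder, refreshing changed ones."""
    share = Path(share_dir)
    share.mkdir(parents=True, exist_ok=True)
    for item in local_items:
        src = Path(item)
        dst = share / src.name
        # Re-copy only when content differs, so repeated publishes stay cheap
        # and recipients always see the latest version of each item.
        if not dst.exists() or _digest(src) != _digest(dst):
            shutil.copy2(src, dst)
```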

We did a lot of design sketches, because we really wanted to be sure that the things that you were sharing were beautiful representations of stuff that you care about. So we’ve learned a lot from social networking applications and blogging tools to really help empower people to customize their views. And then we also want to make mixes -- we don’t want you to have to learn a lot of new communication tools in order to use a mix. You should be able to access a mix through e-mail, through your RSS viewer, on the desktop, et cetera.

So this is a desktop search app, and we really wanted to make sure that desktop search looked beautiful, that I could kind of come in and look at all the things on my machine, and have this be a really pleasant experience for people to really sort of embrace and celebrate the data that they have.

So let me create a mix. So let’s -- what I like to do is, I have a little template, and I’m going to create a Rick Rashid mix. I’m going to call it Rick; I’m going to type in Rick, and that’s going to be the name of my mix. And I’m going to say, let’s look for all the documents about Microsoft Research, and let’s do a search out on the Web for Rick Rashid. So really I’m just inputting search terms. Let’s see if anybody has posted any funny pictures of Rick on Flickr. And maybe, since he loves Star Trek, let’s look at Star Trek.

So I finish that up. And what the app does is, it just automatically creates an assembly of information about, in this case, a person, so it’s Rick. Then I can come in and I can say, you know, I want these views to be slightly different. So these are all the things out on the Web about Rick. And someone actually did post a picture of Rick on Flickr. Then I can see previews of all the Star Trek videos out there.

So I’ve gone ahead and created some of these for people and things that I care about. So I, for example, have a friend named Mimi, and a lot of the information about Mimi she’s not actually authoring; people are posting pictures of her, or posting her talks. She’s a researcher, and this is just a great way for me to keep up with things that she’s doing. Even for myself, I might want to track what people are saying about me, and what people are looking at on our internal Microsoft site about myself. And what I can do is just subscribe to these, so I’m not tasked with constantly going out and searching for things; I can just subscribe and view these in any RSS reader.

So here’s another example; we have little templates for making projects. So one example is for Boku. If Rick wanted to know, or anybody on the research team wanted to know, what was going on with Boku, they might not want to bug me and send me e-mail, and they definitely don’t want to subscribe to all our e-mail aliases, because we just have a lot of information that we’re sharing. But we’re totally open, and we want people to have access to the information that we’re working on. So this, again, is a great way: you can just subscribe to the Boku mix and see it when you want to, but you don’t need to be overwhelmed with so much information coming to you in your inbox.

Here’s one for Vista, which is just fun; the team had a big launch party, and I can track what things people are posting, what things they like about Vista, up on the Web. Or, an example that you might think would be really great, and I wish we had this today: I would love it if we could actually share with you guys more information about the TechFest demos, and the lectures, and research papers that we’ve written, and stuff like that, but it’s still too hard to get that stuff to you, and too hard to make it accessible to you, because it’s constantly changing, and I certainly don’t want to e-mail it all to you. So this is a great tool for tracking information.

So I think that’s it for the demo. I can go back to the slides. So, really, the next step for Mix is we really want to get this out there, because we want to understand if we’ve gotten the privacy and the sharing models right, so people feel in control of their data. So that’s really our next step: probably over the next six to nine months, we’ll sort of open it up, let people try it, and really try to understand what kind of searches and search results people want to author and share with each other.

RICK RASHID: Okay, Lili. Thanks. (Applause.)

So I hope you get a sense of why I get excited about TechFest. It’s an interesting thing that every year I see projects I’ve never seen before. When you have such a large group of people, and so many exciting things going on, and frankly, some of them being done at the last minute just for TechFest, it gets to be pretty exciting stuff. We’ll have over 7,000 Microsoft employees come to this event. This day is really for you; the next two days will really be for them. And to give you a sense, that’s between 20 and 25 percent of all of Microsoft’s employees in the Puget Sound area. So it’s a really big event, and we get a tremendous amount of follow-through from it.

We do a lot of other shows, a lot of other events, conferences, symposia; we’re out there around the world, interacting with the academic community, interacting with the business community, and with governments around the world, so this is just part of a series of events that we run all throughout the year. And here are just some quick facts.

So I want to, again, thank you for being here; this is extraordinarily exciting. I’m glad I had a chance to talk with you, and I hope you have fun through the rest of your day. Thank you.

KEVIN SCHOFIELD: Thanks, Rick.

Now it’s my pleasure to bring out the managing director of Microsoft Research in Redmond, Dr. Rico Malvar. Rico joined Microsoft in 1997; prior to that, he was a Vice President of Research and Advanced Development at PictureTel. He has a Ph.D. in electrical engineering and computer science from MIT, and he’s a native of Brazil. So please join me in welcoming Dr. Rico Malvar.

RICO MALVAR: Thank you, Kevin. Thank you very much. It’s a pleasure to be with you. We really appreciate you all being here, joining us for this exciting TechFest event. Rick already mentioned the basic things about our labs, and some examples of the very cool technologies we develop here. And I’m going to zoom in a little bit on the area of search, interaction, and collaboration, and talk a little bit about some of our ideas moving forward, which hopefully will be a motivation for you to go to our demos and talk to the researchers themselves.

So, just to remind you, we have many interesting demos, and we cluster them into six major areas. You will see examples of very cool projects in all of these areas on the demo floor. In particular, we’re going to zoom in on search, interaction, and collaboration, and I’m going to talk to you a little bit more about search.

So if you think about search, the main problem -- all of you, all of us, we do Web searches all the time. We go to the search engine, we type things, and we want results. And the key problem is relevance. And that was a great example of collaboration between our labs and our product teams. In fact, Microsoft Research Redmond teamed up with Microsoft Research Cambridge, and we developed a new machine-learned approach, based on new features from the words in the Web pages, a huge neural network, and a bunch of sophisticated technologies. Working together with the product team, we ended up with better ordering of results, significantly better. I’m sure you’ve all noticed that in the past few years the quality of results from Live.com has improved quite a bit, and that was a great example of collaboration between research in different labs and the product teams.
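The neural-network approach Rico mentions is in the spirit of MSR’s published RankNet work on learning to rank; here is a simplified sketch of the core pairwise idea (mine, not the production ranker): a model scores each document, and training penalizes pairs whose scores come out in the wrong order.

```python
import math

def pair_probability(score_i, score_j):
    """Model's probability that document i should rank above document j."""
    return 1.0 / (1.0 + math.exp(-(score_i - score_j)))

def pair_loss(score_i, score_j, i_is_better=True):
    """Cross-entropy loss on one ordered pair of documents."""
    p = pair_probability(score_i, score_j)
    target = 1.0 if i_is_better else 0.0
    return -(target * math.log(p) + (1 - target) * math.log(1 - p))

print(f"{pair_loss(2.0, 0.5):.2f}")  # 0.20: right order, small loss
print(f"{pair_loss(0.5, 2.0):.2f}")  # 1.70: wrong order is penalized
```

Gradients of this loss, backpropagated through a neural network over page features, push relevant documents above less relevant ones.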

Now, the question is, what’s next for search? What are we going to do moving forward? So, as an example, how do we evolve search? Suppose you’re a high school student and you’re working on something about football, and you want to write a chapter about Reggie Bush. So you go to the search engine and you type Bush. Well, you can expect what would happen if you just do that: the search engine will nicely show you some sponsored sites, some links. It will suggest a few things on the right side, but the first big hit is actually going to be something about George Bush, and that’s not what you’re looking for.

So in this case, the search results did a good job of suggesting Reggie Bush to you, and you could have clicked on that, but it would be better if the computer knew I was working on a football article; shouldn’t the Reggie Bush things come up first, because I’m probably working on that? Suppose, actually, that in that article I’m in a section where I’m talking about the Reggie Bush fan club, because I want to talk more about how the fan club started, and how you can join, and things like that. Then, if the computer knows I’m doing that, wouldn’t it be nice if the results showed not only Reggie Bush first, but the Reggie Bush fan club first?

So the idea is that the search engine would now have some knowledge of what you’re doing, through a complicated client-server architecture, but at the end you get the results that are more likely to be of interest to you at that moment, for the task you’re at. So I just showed a mock-up there. You could have search results where you’re actually not typing anything; things just happen. So it’s kind of an evolving, implicit, and personalized search. In fact, in the demo afterwards, Silviu is going to show us a good implementation of this idea, of trying to disambiguate things based on what you’re working on.
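One very simple way to realize the "computer knows what I’m working on" idea is to re-rank ordinary results by their overlap with the document being edited. A hypothetical sketch, not Silviu’s actual method:

```python
def contextual_rerank(results, context_text):
    """Order results by how many words they share with the working document."""
    context = set(context_text.lower().split())
    def overlap(result):
        words = set((result["title"] + " " + result["snippet"]).lower().split())
        return len(words & context)
    return sorted(results, key=overlap, reverse=True)

results = [
    {"title": "George W. Bush", "snippet": "43rd President of the United States"},
    {"title": "Reggie Bush", "snippet": "USC football running back and Heisman winner"},
]
doc = "chapter on college football and the Reggie Bush fan club"
print([r["title"] for r in contextual_rerank(results, doc)])
# ['Reggie Bush', 'George W. Bush']
```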

Now, search has a lot of math in it, and I just wanted to show a little bit of one of these examples, where you actually have to combine sophisticated algorithms, machine learning algorithms, and statistical analysis. In one of our demos that you can see this morning, which we call Pictures of Search Relevance, we basically build these graphs that show the connections between the links you see in search results, and those can be helpful in prediction: if you click on this one, then you’re more likely to click on that one; or, if you saw this one, maybe this is the next one I should be showing you.

So these arrows in the graphs basically encode the probabilities that you’re going to visit one site after another. And you can see how many things you can do with that technology, which is what we are exploring now. You can change the order of the results based on that. You can use it to feed the advertisement engine, to do a better job of positioning ads, all kinds of things. This is just to give you an idea that there is a lot of very sophisticated math behind the scenes.
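The underlying estimate can be as simple as counting observed transitions between results within user sessions. This Markov-style sketch is my illustration of the concept, not the demo’s actual algorithm:

```python
from collections import Counter, defaultdict

# Toy click sessions: each list is the sequence of results one user visited.
sessions = [
    ["msr.com", "techfest.com", "skyserver.org"],
    ["msr.com", "skyserver.org"],
    ["msr.com", "techfest.com"],
]

counts = defaultdict(Counter)
for session in sessions:
    for a, b in zip(session, session[1:]):
        counts[a][b] += 1

# Normalize counts into "probability of visiting b right after a".
transition = {
    a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
    for a, nexts in counts.items()
}
print(transition["msr.com"])
# {'techfest.com': 0.666..., 'skyserver.org': 0.333...}
```

The arrows in the demo’s graphs correspond to these conditional probabilities, which can then feed re-ranking or ad placement.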

Another topic, now switching away from search: I want to talk a little bit about collaboration. One typical example of collaboration is when you’re brainstorming an idea together. And the typical way we do that today is we go to a whiteboard, with ink pens, and do that. But suppose you’re here and the people you want to collaborate with are in a very different location. You can’t be at the same physical whiteboard. You could buy a very expensive electronic whiteboard that would exchange things, but then you would need special pens, and it would be difficult to use.

So one of the ideas we keep in mind as we develop new technologies is, how can you interface existing technologies with the new one, so it's actually easy for you, because you already know how to use the existing technology? Can I take a physical whiteboard, and we have about 40,000 of those here on the Microsoft campus, just to give you an idea, can I leverage that and make the ink you write show up on the other side? All of you can imagine: sure, I put up a webcam and point it at the whiteboard. But then how do I put on my whiteboard what the person wrote on the other side? You say, well, I use a projector. Right, so I project onto the whiteboard, and whatever you write in ink I pick up with the camera. But the image I capture includes what I'm projecting, so if I capture the image and project it back, it produces a shadow. We're going to see an example of that, and it creates a very interesting computer vision problem: how can I make that work and remove the shadows, so you can actually share the same image on two physical whiteboards, with real ink and everything? Let me show you an example. It was produced by our collaboration team here in Redmond, working with some folks from Asia as well.

So you see, as you write on the whiteboard, because what you capture from the webcam is projected back again, it produces shadows. We call this visual echo. The idea is that you can use computer vision algorithms to predict that echo and cancel it out of the video that you project back. Now here is the demo with visual echo cancellation: as you can see, now you write and there's no more echo, and in the remote location you get an image of what that person is writing.

Now you can write on top of it, so it's a combination of real ink, the ink you're writing, and the image of the ink from the remote person. Whatever you wrote you can go back and erase, and that will be shown on the other side; then the remote person can erase on their side, and that shows up too. And this matters: in tests we did with a few people, they were trying to erase the remote ink, because it was hard to distinguish which was which. The lesson here is that you can leverage existing technology with a very simple system, and then, just through the magic of software, provide a very good tool for people to work together.
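A minimal sketch of the echo-cancellation step, using OpenCV. The single-gain photometric model and the function name are simplifying assumptions; the real system would calibrate a richer model of how projected light appears to the camera:

    import cv2
    import numpy as np

    def cancel_visual_echo(camera_frame, projected_image, homography, gain=0.9):
        # Warp the image we are projecting into the camera's view of the
        # board, predict how it appears there (here: one linear gain),
        # and subtract that prediction so mostly the physical ink remains.
        h, w = camera_frame.shape[:2]
        echo = cv2.warpPerspective(projected_image, homography, (w, h))
        predicted = np.clip(gain * echo.astype(np.float32), 0, 255)
        return cv2.subtract(camera_frame, predicted.astype(camera_frame.dtype))

The homography would come from a one-time geometric calibration between the projector, the whiteboard, and the camera.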

Okay. At this point I would like to show you a few demos. The first one is these new ideas for search, with recognition and disambiguation of entities in text. Silviu-Petru Cucerzan will join us and explain what that means.

SILVIU-PETRU CUCERZAN: Hi, I'm Silviu-Petru Cucerzan, from the Text Mining, Search and Navigation Group, and I will show you today a prototype of a browser incorporating a technology that aims to change the way we interact with information, moving us from a world of informationally disconnected software boxes to a rich information world in which applications can communicate with each other, and, in a manner similar to Rico's example, are aware of the context in the other application.

That’s because this communication is done in a world of concepts, rather than just a world of bits, and maybe words. So this is an example, so I was reading yesterday this new story on Dick Cheney, and you see that the application processed the news story, and it extracted all the important concepts in here, such as blood clot, and if I click on that it’s actually disambiguated, it took me to the right reference page, in this case a page on Wikipedia, but it could be also Encarta, or Web MD, and so on. So it knows all these locations, all these people, and it can take me to the right information. So I know if I click on DVT, right, it gets the deep vein thrombosis. So I can interact with it and learn more.

I can also do Web search directly. Let's say that I search for DoD. To disambiguate it correctly, we can look at the possible associations DoD has, and we've got the right one. But if we look at the regular search results on the Web, where we didn't use the context of the article, you can see that some are right and some are actually wrong in our context. It's good that we have a mixture of results, but not for this particular context. So let's do exactly the same search, but this time telling the system to use the context of the article, and now suddenly all the search results are relevant to my search.
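A minimal sketch of context-driven disambiguation in Python. The bag-of-words overlap scoring and the candidate descriptions below are illustrative assumptions; the actual system uses far richer statistics mined from reference sources like Wikipedia:

    import re
    from collections import Counter

    def disambiguate(context_text, candidates):
        # Score each candidate entity by word overlap between the
        # article's text and the candidate's description; pick the best.
        tokenize = lambda s: Counter(re.findall(r"[a-z]+", s.lower()))
        context = tokenize(context_text)
        def overlap(description):
            words = tokenize(description)
            return sum(min(context[w], words[w]) for w in words)
        return max(candidates, key=lambda name: overlap(candidates[name]))

    candidates = {
        "United States Department of Defense":
            "military armed forces pentagon defense department",
        "DoD (band)": "rock band album tour music",
    }
    article = "The Pentagon requested more defense funding for the armed forces."
    print(disambiguate(article, candidates))  # -> the Department of Defense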

So let’s now say that I move to a different topic, and these are a few articles from last week, and I’m right now working on a new story on some event concerning the LSU football team. So with the click of a button I can tell the application, process this news story, and extract for me all the important information, and provide relevant  provide relevant information to me. Sorry it’s the highlight. Somebody played with the highlight. So I can do it so that it’s not intrusive, right. So I might see exactly the same story without highlights, but I can actually see it with highlights.

And what’s happening is, for example, Reggie Bush here is disambiguated, it’s Reggie Bush, and the Tigers are disambiguated as the LSU Tigers, and so on, right, Carroll, Pete Carroll, and we have New Orleans Saints.

Now, once we are in this space of concepts, we can create semantic bookmarks, because we don't have to populate a flat list of bookmarks anymore; we can bookmark pages based on these concepts. For example, you can see here that I created a few bookmarks for the New Orleans team that I'm interested in: the official site of the team, the place where I can buy tickets, and a message board for the team.

It's as simple as going and finding the results that I like, and then saying, bookmark this. I already bookmarked this one, so let's go back. Let's say this is a page that I like; okay, it has some interesting links, so I say bookmark, and now I have that bookmark in my list. If I go now to Reggie Bush, I can have bookmarks for Reggie Bush, and so on. And I have images, I have news, all this stuff. It also shows me other associations, so in case it didn't get an association right, I could actually go and change it.
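A minimal sketch of the underlying idea of semantic bookmarks, filing each bookmark under the disambiguated concept it concerns rather than in one flat list. The concept names and URLs are illustrative, not from the demo:

    from collections import defaultdict

    semantic_bookmarks = defaultdict(list)  # concept -> list of bookmarks

    def bookmark(concept, title, url):
        semantic_bookmarks[concept].append({"title": title, "url": url})

    bookmark("New Orleans Saints", "Official site", "http://www.neworleanssaints.com")
    bookmark("New Orleans Saints", "Ticket sales", "http://tickets.example.com")
    bookmark("Reggie Bush", "Fan club", "http://fanclub.example.com")

    # Looking up everything filed under one concept:
    for entry in semantic_bookmarks["New Orleans Saints"]:
        print(entry["title"], "->", entry["url"])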

For example, here you see USC, which in this context could be disambiguated in multiple ways, so the system didn't disambiguate it for me; instead it provides a list of possible disambiguations. What the user can do is find the right disambiguation and then say, I want to create that association. Now USC is associated with the University of Southern California, and if the user chooses to help other users, the system sends this feedback to our server, and we can apply machine learning to learn from the example. The user can also select any word or term in the document and perform a search, and if they like what they find they can bookmark it, or actually create hyperlinks here. So they have a personalized view of the world; over time, they can build this personalized view where they have access to all the information they need.

This is what I am showing right now, and I invite you to the booth. I can show you more stuff there. Thank you. (Applause.)

RICO MALVAR: Thank you very much.

Okay, going back to the slides, we’re now going to switch to interaction, especially interaction among groups of people.

And for that I introduce you to Mark Smith, who manages our Community Technologies Group. Mark.

MARK SMITH: Thank you, Rico.

Good morning. Hi, I'm Mark Smith. I'm a sociologist here at Microsoft Research in Redmond. I'm going to show you a bit of our work on a project we call Community Buzz: tools for getting the big picture out of online community content. You may have noticed that online communities, and the many synonyms for that phenomenon, are happening in a big, big way out on the Net. It turns out that people are the thing that brings people to the Net, and here are just a few of the keywords, names of systems, and scaffoldings for interaction and social engagement that happen on the Internet.

The interesting thing for me is that we are living through a kind of speciation event, one in which new forms of interaction are emerging online a few times a year. Who would have thought of social networking, or BitTorrent, or folksonomy, or my favorite new buzzword, mobile social software, or MoSoSo? These are words that we didn't even say a year or so ago.

This has become such a phenomenon that it's actually the Person of the Year for Time magazine. My issue with this issue is that Time uses the word You as if it were the singular you, and I come from a city in the United States, Philadelphia, where we actually pronounce the plural of you as yous, and I think that's in some ways the more accurate name for the Person of the Year: not you, just one of you, but you, many of you. And if you want, we can translate it to Southern and make it y'all. My point here is that the collective power of content creation is now a force to be reckoned with. The problem I find is that these are the typical interfaces to many of these kinds of spaces: a kind of interface that essentially shows you the branches, or the leaves, but not the trees, and certainly not the forest.

So what’s missing? What we’re looking for is something that essentially allows you to do the kind of thing that we have become accustomed to doing with these geographical and mapping systems, essentially allowing you to zoom backwards, outward from the detail of any particular conversation until we provide a kind of global view. What would that look like? Well, if you come to our booth across the hall, we’ll be showing of the prototype for Community Buzz. Community Buzz is a collaboration from the Microsoft Research Cambridge Lab, and folks here in Redmond. In Community Buzz, what we’re trying to do is automatically generate a tag cloud. Now, a tag cloud is something that you’re probably familiar with, it’s now almost a clichéd part of a Web 2.0 application. But in this case, the tags are not human authored, they’re machine generated, extracted from the conversations taking place. Second, we’re providing trend line analysis. So, for example here, you can see the trend line between I believe this is the two release versions of Vista. You can see the first release candidate being talked about a great deal, and then trending down, and then the release to manufacturer version being the new topic or focus of conversation.

We’re also looking at delivering ads to these pages, so that you can get contextual ads related to the topics that you’re exploring. Trending is an interesting way to get some overview around these topics, and some of the trends have been very illuminating. So, for example, we’ve looked at things like the Zune versus iPod discussion, and we can see us coming from behind, and perhaps garnering a certain amount of attention there, even against a well-established incumbent. Here we could look at the trend lines for conversation related to three of the major console games in the market today. Now, it’s important for us not to just give you trend lines across a global community content data set, but to begin to segment the content based on sociological analysis of the participants involved in the conversation. In other words, we are now asking the question, who says that. We can explore that by doing social network analysis, the typical graph of dots and lines. Here we’re drawing this graph out of the relationships generated by people talking to one another in a discussion of the Windows XP Server product.

Now, looking at this can be a bit confusing at first, but you may notice that not all of the dots are the same. In fact, we can see here that there are these two clouds, or lobes, of participants, and they have one thing in common: they never reply to anyone, and somebody replied to them. Who replied to them? Well, these guys, these two dots, and there's another one down in the corner. These are what we call answer people, people who exhibit a pattern of behavior in which they reply to dozens, if not hundreds, of others, rarely get replied to, and reply mostly to people who posted a question as an initial turn and never post again. This kind of pattern analysis allows us to disambiguate content, making sure that when we see a trend line, it is not a trend line generated by spammers or other kinds of flame warriors.
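A minimal sketch of detecting that pattern from a reply graph. The thresholds and the input format are illustrative, not the project's actual values:

    from collections import Counter

    def classify_participants(replies):
        # replies: list of (author, replied_to) pairs from threads.
        # "Answer people" reply to many distinct others but are
        # rarely replied to themselves.
        out_deg, in_deg = Counter(), Counter()
        for author, target in set(replies):   # count distinct pairs once
            out_deg[author] += 1
            in_deg[target] += 1
        labels = {}
        for person in set(out_deg) | set(in_deg):
            if out_deg[person] >= 10 and in_deg[person] <= 2:
                labels[person] = "answer person"
            elif out_deg[person] == 0:
                labels[person] = "question asker"
            else:
                labels[person] = "discussant"
        return labels

    # e.g. one helper answering 12 different first-time posters:
    log = [("helper", "poster%d" % i) for i in range(12)]
    print(classify_participants(log)["helper"])   # -> "answer person"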

Another way to look at this data is with histograms of activity over time, to give you a little sense of how much participants can differ from one another. You can see clearly at the top people who only reply, and do so briefly, versus people at the bottom who both reply and initiate, and also contribute large numbers of messages. We have a name for the fellow in the lower right-hand corner, and that would be the needs-medication person, otherwise known as the flame warrior. So our key insight is essentially that most of what is happening online is now authored with a recipient in mind, and that means that software is becoming a relational medium. What we're trying to do is simply gobble up the traces left behind by these interactions and add some machine-generated value based on the patterns that emerge in these environments. These patterns turn out to be telling, a way of giving us insight into this vast pile of content that is generating more and more content every day, one that we otherwise have a hard time getting insight into.

Hopefully you’ll come and see us over in the Community Buzz booth, give it a try, and see what kinds of trends jump out at you, and I’ll thank you for your time. (Applause.)

And here’s Kevin Schofield to take us along.

KEVIN SCHOFIELD: All right. It’s been quite the busy morning. And in just a few minutes, we’re going to be opening up the demo floor across the hallway here. I just actually wanted to give a couple of housekeeping notes, and sort of set some context for you here.

Those of you who have been to an industry trade show before: this is going to be nothing like that. We have no marketing people staffing the booths across the hallway. And it's okay to applaud for that, by the way, if you want. You're going to meet our real researchers, the ones who actually created the technologies, and I hope you have fascinating conversations because, you know, I know essentially all of them, and they're all just amazing, fascinating people. You're going to have unvarnished conversations with them, and you're going to see unvarnished technology. What you're going to see are really research prototypes; some of them may look like they're product quality, but they're not. We've still got a lot of work to do with our product groups before any of these things can actually end up in our customers' hands, because we really want to make sure that the quality of the experience our customers have with these kinds of technologies is very high. So I want to set your expectations about that.

Also, I just want to point out that, as with many of you, for many of our researchers who you’ll meet today, English is not their first language, and for us in Microsoft Research there’s an enormous benefit to having a global staff, to be able to hire the best and brightest people from around the world, and that’s something that we really delight in. And the benefit of that for us greatly outweighs the challenges of language issues. But I just want to make sure you’re aware of that, and appreciate your patience and understanding as you have conversations over the course of the day.

So now we can sort of move on. You've probably noticed that your badges are different colors. We actually have red badges, blue badges, and yellow badges, and we did this intentionally: we packed so many demos into the room across the hallway that there isn't enough room for all of you at the same time. So we're going to try to stagger you around just a little bit. But we have great things planned for everybody, so nobody is going to feel like they're sitting around bored.

What I would ask at this point is for the people with red and blue badges to go out into the hallway, where we have hosts with red or blue flags who will escort you to your next session. Those of you with yellow badges, if you just wait a couple of minutes and let the people with red and blue badges go outside, meet their hosts, and move out of the way, you can go straight across the hall into the demo room.

Thank you very much, and enjoy the day.