Remarks by Craig Mundie, chief research and strategy officer, and Tony Hey, corporate vice president of Microsoft Research Connections
Redmond, Wash.
July 18, 2011
Editor’s note:
One of the demos from the event was not included in the transcript below.
TONY HEY: (Applause.) Thank you very much. Thanks very much. Yes, well, we don’t have much of an empire.
Anyway, so it’s great to be here. It’s wonderful to see you all here. So, it’s the 12th Faculty Summit we’ve had at Microsoft Research, and its purpose is really that of a research conference: it’s dialogue, it’s an exchange of views. We’ve made sure, for example, that there is no session which has only Microsoft speakers. There’s a genuine exchange of views. So, that’s the purpose of the event.
This year’s theme is “future world,” and what we would like to see is how technologies such as the natural user interface, the cloud, and machine learning are going to affect the way the world evolves in our daily lives, in our workplace, and how we address significant scientific and social challenges.
So, there are some statistics up on the slide here, and it’s great that we have over 200 first-time attendees, and I hope you have a very interesting day and you’ll give us feedback at the end of the day as to what we did right and what you might like to see next time you come.
We have a very exciting and diverse program. We kick off in a moment with Craig Mundie, who I’ll introduce later. He’s our chief research and strategy officer, and we also have Peter Lee, who is now our new Redmond lab research director, and he will be talking about his experiences as a professor in the university, in government research at DARPA, and latterly in industry at Microsoft Research.
On Tuesday we have Rick Rashid presenting our faculty fellows and talking about some of our community programs, and we have a keynote from Rick Szeliski about vision-based natural user interfaces. On Wednesday, we have a closing keynote from Lili Cheng, who is going to tell us about future social experiences. So, I think those are sort of anchor points to the program.
We’ve tried to do something different this year. We’ve had fewer parallel sessions, but we’ve tried to make them exciting and interesting, and we’ve listed some of them on the slide.
It’s a different format. Instead of the cruise being tonight, the boat cruise is tomorrow, for example, which I hope you’ll enjoy, and we look forward to your feedback on the format of the meeting and whether you think having an extra half day on Wednesday is a good idea or not.
That’s what we’re going to do in just a moment, but first of all I’d like to tell you about Microsoft Research Connections. So, Research Connections is our new name for our collaborative research with universities, and what we’re trying to do is work with the academic research community to speed research, improve education, and foster innovation. That’s our mission statement, and we do that through collaborations with scientists, trying to use state-of-the-art computer science research to help scientists solve problems that we care about and humanity cares about, like disease or the environment.
We also believe it’s important that we build the next generation of the computer science community, because that’s going to be a key community for the world in the future, so we need to inspire computer scientists as well as research scientists. And we hope, and that’s what some of our projects are trying to see, that we can accelerate scientific exploration using some of our tools. I’ll give some examples of each of these in just a moment.
So, this is a list of projects that we’ve been doing over the last year and some we’re going to focus on in the coming years. What I’d like to do is just pick out a few highlights that you can then go and find out more about during the conference.
In computer science, we have Try F#. Now, F# is a functional programming language. I remember in the ’80s, functional programming languages were going to be the future; well, that’s still true. But I think actually we may be making a breakthrough with Try F#. It’s really got some interesting features. For example, people in the finance community like it because they can specify units, so they can specify currencies in the language, and you can’t make mistakes because it’s very type-safe and so on.
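Editor’s note: as a small illustration of the units feature being described here, a minimal F# sketch using units of measure; the currency names are just examples.

```fsharp
// Units of measure make currencies part of the type system:
// mixing them up is a compile-time error, not a silent bug.
[<Measure>] type usd
[<Measure>] type eur

let exposure = 1000000.0<usd>
let fxRate = 0.92<eur/usd>

let inEuros = exposure * fxRate       // 920000.0<eur>
// let broken = exposure + fxRate     // rejected by the compiler: usd vs eur/usd
```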
We have also been working quite hard on how you can actually give academics access to huge, realistic-size data sets on which they can try new algorithms, new experiments, and so on. Evelyn Viegas has been working with the Bing team to make Bing data available so you can actually do experiments; there are the Web N-gram Services, which are out there now, and later this week there will be a session on the Speller Challenge, which is a competition to develop new algorithms based on this data.
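Editor’s note: to give a flavor of the kind of algorithm the Speller Challenge invites, here is a hedged F# sketch of a generic noisy-channel speller. The function ngramLogProb is a stand-in for a call to a language-model service such as the Web N-gram Services, not its actual API.

```fsharp
// Stand-in for a language-model score from an n-gram service (higher = more plausible).
let ngramLogProb (phrase: string) : float =
    -0.1 * float phrase.Length          // placeholder score only

// Standard Levenshtein edit distance between the query and a candidate correction.
let editDistance (a: string) (b: string) =
    let m, n = a.Length, b.Length
    let d = Array2D.init (m + 1) (n + 1) (fun i j ->
        if i = 0 then j elif j = 0 then i else 0)
    for i in 1 .. m do
        for j in 1 .. n do
            let cost = if a.[i - 1] = b.[j - 1] then 0 else 1
            d.[i, j] <- List.min [ d.[i - 1, j] + 1; d.[i, j - 1] + 1; d.[i - 1, j - 1] + cost ]
    d.[m, n]

// Rank candidates: language-model plausibility minus an edit-distance penalty.
let rankCorrections (query: string) (candidates: string list) =
    candidates
    |> List.sortByDescending (fun c ->
        ngramLogProb c - 2.0 * float (editDistance query c))
```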
You will also see in this meeting Project Hawaii, which is trying to help educators teach what sort of cool applications you can put together if you have a smartphone and cloud services. So, there will be lots of examples, and it’s something you can find out more about. Those are three examples, and the picture at the bottom is from our spring meeting in Paris on software, which was really very exciting, and the Eiffel Tower did its job. Right.
These are some of our science engagements — earth, energy, environment. I’m sure most of you know about Worldwide Telescope, which is a great visualization and exploration tool for astronomers. What we’re now trying to do, instead of looking outwards, is to look inwards onto the earth and to allow scientists to import their data using a simple add-in for Excel. So they can use geospatial data with temporal information, and here you see a visualization of earthquakes; that’s part of the Ring of Fire. If it goes on, that’s where we are in Seattle; we’re also on that ring. And you can see ways of rendering different types of data. So, we’re hoping to look at exciting things that help scientists do visualizations: location, depth, magnitude, time, all the sorts of information that they should be able to read off fairly simply.
We’re also, with our machine-learning scientists, looking at applying those techniques in a number of areas, principally around genetics. So, we’ve talked in the past about our work on HIV/AIDS. We believe we have some understanding of how you might make a vaccine that’s effective for HIV/AIDS, and so one of the projects we’re actually doing is a trial vaccine. That’s at a very exciting stage, and maybe one of these days we can tell you if it works.
We’re also looking at other diseases which are going to be very important, like asthma and diabetes, and, again, using machine-learning techniques, there have been significant discoveries in that space.
And in Brazil, they run their cars on ethanol and they have a lot of sugar cane there, and the question we’ve been looking at with researchers in Brazil is: can we sequence the whole of the sugar cane genome, which hasn’t been done? It’s a very large sequence. Can we assemble it and enable them to engineer more efficient ways of producing ethanol from sugar cane? So that’s just at the beginning. And what we’re trying to do here is put together a set of tools that will enable you to do that. This is one example from the toolkit: a toolkit for bioinformatics and how you can actually assemble and compare DNA sequences for these very large sequences. So, we’re trying to do that, and it will be available under open source; we’re hoping to give it to an open source foundation, about which I’ll talk a little bit in a moment.
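Editor’s note: purely as an illustration of the kind of primitive such a bioinformatics toolkit builds on, and not the toolkit’s own API, here is an F# sketch of the suffix-prefix overlap used in greedy sequence assembly.

```fsharp
// Longest suffix of read a that matches a prefix of read b:
// the basic primitive behind overlap-based assembly.
let overlap (a: string) (b: string) =
    [ min a.Length b.Length .. -1 .. 1 ]
    |> List.tryFind (fun k -> a.EndsWith(b.Substring(0, k)))
    |> Option.defaultValue 0

// overlap "ACGTTGCA" "TGCAAT" = 4, because the reads share "TGCA"
```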
So, those are applications to science. But just to show that it isn’t all science, we believe that the social sciences, the humanities, and the arts are also going to need visualization tools and ways of manipulating and managing digital data in conjunction with the sorts of artifacts they deal with. So, the humanities, the arts, and the social sciences are an important place. And I just thought I’d show one example.
This is a great example of a project we did at Berkeley with Walter Alvarez. It’s multidisciplinary. Walter Alvarez is a professor of geology; he’s the son of the Nobel Prize-winning particle physicist, and he and his father pieced together, from paleontology, from geophysics, from chemistry, an understanding of the death of the dinosaurs by a meteorite. So, these were the guys who did it, and the iridium deposits were the key signature of that event.
So, it’s multidisciplinary: you want to put together lots of different types of data. This big history agenda is actually trying to compare not only chemistry data, but paleontology data, geographic data, all this stuff, on very large time scales, from seconds up to billions of years. And how can you actually browse that, compare, and do these things in an easy way? That’s what Project ChronoZoom is, and we’re taking that forward to the next stage.
But let me just show you one quick demo of that. This is from their Web page. This is the whole of cosmic history, so down at the bottom here, that’s the Big Bang, the first three minutes, and up here is the whole of the evolution of life; prehuman history is about here. Human history is like the thickness of a sheet of paper here. And then this is the evolution of the planets, and this is the whole accelerating expansion of the universe.
So, you can put huge events on this, and you can go and zoom in on various things. You can zoom in on the Big Bang and look in detail at its various features, go back to cosmic history, go back to the formation of the planets, and you can see here a large amount of data on the climate, the impact history, the production of continental crust, the earth’s interior, and then this is the family tree of humanity emerging from DNA.
And so you can go to human prehistory, which is just this little bit up here, emergence of humans, and then you can go to our history, which is this little tiny piece up here. (Laughter.)
OK, so tools like that for putting all this stuff together are, I think, going to be interesting; you see it disappearing into the background. These sorts of tools, I think, will be exciting and more than just amusing. I think they will actually be real research tools in the future, and that’s one of the things we are looking at with big history.
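Editor’s note: one back-of-the-envelope way to think about the browsing problem, not how ChronoZoom is actually implemented, is to place events on a logarithmic axis so that seconds and billions of years fit on the same zoomable timeline. A small F# sketch:

```fsharp
// Place an event, given in years before present, on a [0, 1] zoom axis
// so that second-scale and billion-year-scale events share one timeline.
let position (yearsAgo: float) =
    let oldest = 13.7e9                  // age of the universe in years
    1.0 - log10 (max yearsAgo 1.0) / log10 oldest

// position 13.7e9 = 0.0 (the Big Bang end of the axis)
// position 1.0    = 1.0 (the most recent year)
```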
Also, just to emphasize, we collaborate in all the regions. So, in Asia they have a great e-heritage project with a gigapixel camera, doing all sorts of wonderful things with museums and art galleries.
In Brazil, they have lots of forests, as I’m sure you all know, and so we have a rain forest sensors project. And in India, there’s a great tool called Rich Interactive Narratives, you can go and see some demos on DigitalNarratives.net on the Web, which puts together a whole variety of vision technologies from the Deep Zoom you just saw to PhotoSynth where you can explore inside a Buddhist temple, for example, and sort of Worldwide Telescope-type tools.
So, a whole range of technologies, and we hope to make that into a tool that you can use either professionally or with a class of school kids, and we believe it has great interest.
In Europe, I’ll just take one project we’re doing with the European Commission, a project called VENUS-C, which is about exploring the use of cloud computing. One of the examples is this virtual fire project, which took place in Greece after the terrible fires they had on the mainland some years ago. This is an early warning system which uses cloud data, cloud computation, risk management modeling and so on, and puts it all together, so you can actually have an early warning: you can see where there might be a high risk of fires, and you can do things about it. So, this is just one example; VENUS-C has a whole range of examples like that.
So, what I’d like to talk about now is another of the areas that particularly fascinates me, which is the transformation of scholarly communication. In many ways, data is one part of it, but just look at this statistic from ISI here: approximately 3,000 scientific articles are published per day, roughly one every ten seconds of a working day. We can now expect that these papers will, each year, cite around five million previous publications. And the rate of production of scientific papers is quadrupling every generation.
So, even if you just look at a specific subset like the biomedical literature in PubMed Central, about two papers a minute are deposited in PubMed abstracts. How do we keep tabs on that? How do we alert ourselves? How do we find out what research has been done? How do we make research more efficient? How do we avoid duplicating and really wasting resources when the world badly needs solutions to disease, to cancer, to HIV, and so on? Really, we’d like to make that process more efficient.
So, it clearly shows that the percentage of human knowledge that one scientist can absorb is rapidly heading to zero.
So, I’d like to draw your attention to one of the sessions on Wednesday, which is about Academic Search. Academic Search is a powerful search tool for academic papers. It comes from our MSR Asia lab in Beijing, where they’ve been working on this project for some time. Historically, we’ve focused on computer science, but we’ve now been expanding that to include other disciplines, such as the physics material in the arXiv repository or the biomedical literature in PubMed Central, for example.
And what you can do is play with it in many ways. You can find the top papers, the top-cited authors, conferences, journals, and so on. You can then drill down and find specific papers, find out where you can get the journal version, where there is a version on the author’s website, and so on. And there’s a nice little tool for exploring the relationships between authors, the Visual Explorer tool. I tried to think of the most connected computer scientist, and I thought of Andy van Dam, so what we’ve shown here is a picture of Andy’s collaboration network, and you can drill down into any of these and see who they connect with. So, it gives you the whole connectivity of the way research has spread out from individual researchers into whole networks.
So, with a tool like this, you can then produce, for example, league tables, and you can choose what parameters you like. And so I looked very hard to find a ranking that would put Microsoft Research on top, and this is one. So, this is actually, if I take the last five years, that’s important, and look at citations: Microsoft Research beats Stanford, MIT, you know. But you can look at these sorts of things. And these are important rankings, because people will look at these sorts of measures: impact factors, citations, total number of publications. These are things that governments want to use, they want to use them as a basis for funding, for example, and what we would like to do is make it so you can decide what you want to rank on. This is just one I found that shows us in a good light, but you may want to find another version which does things differently.
So, in the session on Wednesday, you’ll find an interesting talk by Jevin West from Eigenfactor.org, down at UW here in the biology department, where, using an API that we make available for Academic Search, you can build your own tools. You can actually decide what you want. So, this is an example here: recommend, map, explore, and rank. What Jevin’s been looking at is citation networks and better ways to actually rank papers by significance, and so on.
So, what we’ve been trying to do here is give you the opportunity to build your own recommendation system, your own ranking system, because these things are important and they can be important for funding. In the U.K., one of the reasons I left academia was that I did three years of ranking departments. And as Dean of Engineering, my last act was to get every single department in my faculty into the top rank. And obviously the only place to go from there was down, so it was definitely time to leave. (Laughter.) But it’s a very arduous process, and they’re really very imperfect tools; just citations, just impact factors are not the whole story, and we’d like to give you the opportunity to build what you think is relevant. That’s what the session on Wednesday will be about.
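Editor’s note: as a taste of what one might build on top of a citation data set, and only a toy rather than the Academic Search service or the Eigenfactor method, here is an F# sketch of an eigenvector-style ranking over a small citation graph.

```fsharp
// Toy eigenvector-style ranking over a citation graph (paper -> papers it cites).
let citationRank (cites: Map<string, string list>) (iterations: int) =
    let papers = cites |> Map.toList |> List.map fst
    let n = float papers.Length
    let damping = 0.85
    let start = papers |> List.map (fun p -> p, 1.0 / n) |> Map.ofList
    let step (rank: Map<string, float>) =
        let incoming p =
            cites
            |> Map.toList
            |> List.sumBy (fun (q, refs) ->
                if List.contains p refs then rank.[q] / float refs.Length else 0.0)
        papers
        |> List.map (fun p -> p, (1.0 - damping) / n + damping * incoming p)
        |> Map.ofList
    List.fold (fun rank _ -> step rank) start [ 1 .. iterations ]
```

A call such as `citationRank graph 20` would return a score per paper after 20 iterations; a real tool would of course swap in whatever notion of significance it cares about.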
So, for those of you who are interested in those sorts of things, I do recommend it; it’s an interesting and exciting development, we believe. We’d like to make it so you can help us build something that’s useful for you.
I like this quote. All right? “The first thing most of us think about when we hear the word ‘open’ is Windows.” Now, do you know who said that? Well, the answer is very interesting: Steve Jobs, in October 2010. What he meant was that Windows is an open platform that you can build lots of applications on. And, obviously, it’s in contrast to other companies, shall we say.
OK, that’s meant as a lead-in to the fact that Microsoft is sponsoring an open source foundation called the Outercurve Foundation. I’m a board member; Microsoft has two board members out of five. The idea is to have an open source foundation which is not tied to a specific technology, like Eclipse is for Java; this one is technology-agnostic. It’s also license-agnostic, so you can choose whatever open source license you like, and it’s forge-agnostic, so you can put your software in any forge you like. It’s a more generic thing; yes, it has projects built around Microsoft products, but it’s about the community developing and taking code.
What we’ve developed uses this metaphor of a museum with galleries. So, there are various galleries, and the gallery that my teams sponsor is the research accelerators gallery. What we’re trying to do is make it possible for scientists to lead projects. So, Project Trident, for example, is a workflow project, and we’re working with Beth Plale in Indiana to see if we can make that into a tool that people can use; they can choose to use it, and they can interoperate with various other tools that they want to use. So, we’re exploring, and we would like to work with the community to take that further forward.
Similarly with our chemistry add-in for Word: it sounds a very trivial thing that you can type formulas in Word, like CH4 for methane, but because it’s using something like Chemical Markup Language, the computer has enough information to give you either the name methane, the formula CH4, or the chemical structure. So, it’s a very exciting beginning of semantic chemistry, and that’s led by Peter Murray-Rust at Cambridge University. And because the foundation owns the IP of that, Microsoft people on my team can collaborate with them in a way that doesn’t jeopardize any of Microsoft’s rights and enables them to take part in an open source project.
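Editor’s note: to make the “enough information” point concrete, here is a sketch of the kind of markup involved, built with LINQ to XML in F#. The element names follow the flavor of Chemical Markup Language but are abbreviated for illustration; this is not the exact schema the add-in emits.

```fsharp
open System.Xml.Linq

let xn (s: string) = XName.Get s

// A CML-flavoured fragment for methane: atoms plus a formula, so the same data
// can be rendered as the name, as CH4, or as a drawn structure.
let methane =
    XElement(xn "molecule",
        XAttribute(xn "id", "methane"),
        XElement(xn "atomArray",
            [ for elt in [ "C"; "H"; "H"; "H"; "H" ] ->
                XElement(xn "atom", XAttribute(xn "elementType", elt)) ]),
        XElement(xn "formula", XAttribute(xn "concise", "C 1 H 4")))
```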
And the longest-standing project is ConferenceXP, with Richard Anderson at UW; that’s a collaborative communication tool which has been around a long time, and now it’s available under an Apache license. These are all standard open source licenses. And what we hope to do is put the biology foundation tools in there, and other tools as they come along. And if there are things that we should be doing that you’d like us to do, then we would be very happy to take ideas and to work with you to develop galleries that you actually want.
So, that’s just a brief run-through of some of the projects we do, some of the tools, and what we’re trying to do to make Microsoft an open platform that people can add to, extend, and use in ways that are useful for their research, both in computer science and for scientists doing their own research.
So, the last thing I have to say is thank you to Dennis, to Judith, and to Harold, and to many others who worked very hard over the past six months to make this event possible. You’ve all got programs that Dennis and Judith prepared and showed you, so you choose the sessions you want, but don’t feel you have to go to all the sessions, you can use the space for networking, that’s part of the purpose. You can actually meet the researchers in Microsoft Research doing the research that you’re interested in, and this is an opportunity to begin to collaborate and to see how we can develop projects together. So, use the opportunity to find researchers doing things that you want to do and to talk to them. This is a great networking opportunity.
Make sure you have a good time, and I hope the weather stays fine. At the moment it’s behaving itself today, and so thank you very much for coming, thank you for listening, thank you. (Applause.)
Now the meeting really gets started. So, I’d like to introduce Craig Mundie, Microsoft’s chief research and strategy officer. In his role at Microsoft, Craig oversees Microsoft Research and is also responsible for the company’s long-term technology strategy on a much longer time scale than the business groups.
On a personal note, I would like to thank Craig for convincing me to join Microsoft, and for giving me the chance to implement his vision of how science can be transformed by using advanced computing research technologies, and that’s really been a privilege, and I think we’re really making some exciting progress. But, again, I’m grateful to Craig for letting me have the chance to do that.
So, Craig is joining us today to provide us his perspective on the transformation in computing on which our society is embarking. So, Craig, welcome. (Applause.)
CRAIG MUNDIE: Thanks, Tony. Good morning, everyone. I’ve enjoyed the opportunity to talk to this group the last few years. Over that period of time, a lot of the emphasis in my remarks was around the question of how would computing evolve. A major emphasis there was how would the user interface evolve.
And I think this year was an important year for us because for the first time, some of these fundamental changes are no longer research but are really showing up in very important ways in our products.
And I want to talk about that, but I also want to talk about how I think computing will be transformed a bit more broadly today.
There have been ongoing changes, many of them of course driven by hardware and improved software capabilities. Today, I think we are beginning to see a change from a world where data was collected typically for use in a single application to the realization that, more and more, the ability to compose very large data sets together dynamically really is an enabling technology. And so the term “big data,” I think, has gone from something that few people understood to a very important part of that.
The ability to compose it is important, but the ability to have a lot of it, of course, has become really, really important. And the scale with which we can collect this data is changing the way people think about it. It used to be the case that whether you were an individual or an enterprise, you spent quite a bit of your time trying to figure out how to manage the amount of data that you had and accumulated.
But, increasingly, as storage has grown exponentially at relatively constant cost, or even declining cost, we now see the opportunity to retain data over a much longer period of time. And while that brings some additional challenges, for example, around privacy and other areas of regulatory concern, it does create opportunities if we can find a way to balance those interests with the insight that can be gained from this huge amount of data.
I think another big change is one where we grew accustomed to the idea that we had computers on our desks or on our laps, but now computing is essentially much more diversified in terms of the range of clients. Tablets, phones, televisions, cars, game consoles: there’s just a very wide array of these intelligent devices. And now we don’t think so much about the client-server model, sort of a unified architecture which I think was very effective in the enterprise environment; we can now really think of that model on steroids, where the clients, with all their diversity, are coupled to the cloud in a very, very integrated way.
I think that that’s a transition that’s still underway but one that will continue to be important. And of course the thing that we spent a lot of time talking about here the last few years is the transition from the graphical user interface to the natural user interface where computers essentially become more like us.
And so I think as we look at this array of capabilities, we’re really looking at an opportunity to develop new applications for this information, new ways of allowing people to program it, to some extent to build applications with a lot less of the traditional application development complexity, and yet produce things that are more helpful to people. I think that’s, ultimately, one of our goals to make the computer less of a tool and more of a helper. And I think this collection of technologies is really moving us pretty rapidly in that direction.
With all this big, composable data, you know, we now have a big opportunity to start to learn from this data. We’ve known for a long time that as we developed more and more high-scale machine learning capabilities that we’d be able to train those tools on more and more different data sets and learn things from them.
And of course you see initiatives, for example, like the one in the United States where the Obama Administration created the Data.gov initiative, where they basically have said to all the departments of government: hey, you’ve been collecting data for a long time, a lot of it is just sort of sequestered in your internal systems, and we want to make an emphasis of making it available.
And so whether that’s for scientific purposes or demographic purposes, the ability to put that data out there and allow other people to build on it for applications that the government never really envisioned I think is just one example of how these kinds of things are starting to take place.
In order to facilitate this and, indeed, do it with a lot less orientation toward writing programs yourself, but rather by applying the kind of techniques that people have mastered in desktop tools like Excel, one of the groups here has built a research toolkit that we call Excel DataScope. What we wanted to do was give people the familiarity of something like Excel, where they know how to express the relationships that they’re interested in analyzing, and even the graphical tools for the visualizations; but we wanted them to be able to apply it to much larger data sets than would conveniently fit on, or be processed by, their individual personal computer, and we also wanted them to be able to compose these other public data sets.
As part of the Azure effort, we have been building sort of a data market where people who have these large, even commercial, data sets are able to place them into that cloud facility on a pre-staged basis, and therefore make them a lot easier for people to discover and incorporate with their own proprietary data in order to solve these new and interesting problems.
So, in this Excel DataScope environment, what we did was essentially take the extensible ribbon toolbar that is now part of the current Excel product and build a new ribbon that really is sort of a push-button interface for integrating and analyzing, in a fairly automatic way, these super-scale data sets in the Azure cloud.
So, I think by giving people these point-and-click kind of tools, we’re able to take the level of understanding that they had historically just about their own data and be able to transparently give them access to a level of computational and storage capabilities and access to these composable, large data sets in ways that were really never possible before.
In particular, this Excel DataScope toolkit is built on, or powered by, a technology that’s been code-named Daytona, also built by the same extreme computing group here, where they’ve gone out around the company and brought together, particularly from the Microsoft Research organization, a number of very sophisticated data analytic tools. They’ve built them on a sort of map-reduce architecture, hosted it on Azure, and essentially are making it available as a service.
So, for all of you, effective today, this underlying Daytona platform is available as a research toolkit, and we’re going to continue to make it fancier over time. And so the kind of thing that we built with the Excel DataScope is something that you could build yourself either as another tool for your students or colleagues, or in fact you could use this underlying platform in a more direct way to do very specific applications.
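Editor’s note: to show the shape of the programming model, here is a minimal, in-memory F# sketch of the map/reduce pattern that a platform like Daytona distributes across Azure. The function names are illustrative, not Daytona’s actual API.

```fsharp
// The map/reduce shape, run locally; a real platform partitions the data
// and runs mappers and reducers across cloud nodes.
let mapReduce (mapper: 'a -> seq<'k * 'v>)
              (reducer: 'k -> seq<'v> -> 'r)
              (data: seq<'a>) =
    data
    |> Seq.collect mapper
    |> Seq.groupBy fst
    |> Seq.map (fun (key, pairs) -> key, reducer key (Seq.map snd pairs))

// Example: word counts across a collection of documents.
let wordCounts (docs: seq<string>) =
    docs
    |> mapReduce (fun (doc: string) -> doc.Split(' ') |> Seq.map (fun w -> w, 1))
                 (fun _ counts -> Seq.sum counts)
```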
Even with that, there remain a lot of interesting challenges. The data continues to grow essentially exponentially, and so how to continue to refine these algorithms to deal with the scale of that data is important. The latency, even the latency within the cloud interconnect itself, becomes a critical component in dealing with it. Many of the techniques that have been learned before in building high-performance clusters I think will find their way, in some form or another, into the architecture of these Azure cloud environments in order to allow more computational intensity even in these very expanded environments.
There are obviously lots of challenges and cost and time tradeoffs to be made as you look at analyzing these large data sets, but we think that this is a very good beginning, and one that should give the research community global access to a set of tools, either for direct consumption or for use through things like the Excel DataScope ribbon interface, that will empower people to get at some of these very large data sets.
Now, I’ve seen some interesting demos of these things where you can just sit there with what appears to be an Excel spreadsheet, you know, just pull down a list of huge data sets like the U.S. Census data and be able to point at it, select a particular subset of information or automatically sample it, you know, do computations, blend it with your own data, and as far as you’re concerned, you’re just manipulating an Excel spreadsheet. But when I think about the scale of the data that is represented behind it and the fact that these analytics are being done for you in literally a matter of seconds, you begin to realize that there’s really something quite magical about that.
We’ve also been looking at applying these same capabilities to develop more valuable capabilities in the business domain as well. One of the businesses we started here about five or six years ago is our Health Solutions Group. And the first thing that we did in building that was recognize that we really wanted to build a data platform. What you see here are lots of samples of data that were taken from the different departments, if you will, of a big hospital group in the Washington, D.C. area that was our development partner over the years in perfecting this product called Amalga.
What it does is essentially create a means of ingesting all of the data from all of the sources that exist within a hospital: the business information, the operational information, the workflow information, and the clinical and laboratory data. And by integrating this into one very, very high-scale database and aggregating it over a long period of time, we have probably one of the few examples in the world of a large-scale data asset that touches all of these widely varying information types.
So, we began to ask questions: how would we use machine learning and these very high-scale data platforms to do things that are valuable in either improving quality or lowering cost within the healthcare environment? That’s a challenge that the U.S. and virtually every other country face these days: people want better health, and yet it’s becoming exceedingly expensive.
So, one of the experiments that we’ve been doing is to look to the data to begin to answer questions about the readmission of people who have been in the hospital. It’s one of the greatest cost drivers, certainly, in the U.S. healthcare system: you’re admitted to the hospital for one particular reason, you’re treated for that, and then you’re discharged. And then, for reasons that people haven’t really been able to understand, within somewhere between three and 30 days, a relatively high percentage of people are readmitted to the hospital, sometimes for a recurrence of what they came in for, but very frequently for something else. And the question is, you know, why does that happen?
Well, there have been a lot of theories, some certainly correct, about why that would be true. But if those were the only reasons, then you’d say, well, we’d fix them; you wouldn’t see this recurring readmission. And yet it’s been obvious that we do see the readmissions. So, we did a fascinating experiment, and I’ll just show you a snippet here, one example. This is a set of data that is the output of a tool that predicts the readmission of current patients.
So, what they did is this: we had ten years of this data, and we basically trained the machine learning system on it and said, answer the question. Look at all the people who got readmitted, out of 300,000 hospital visits, look at all the data, and see if you can find patterns that correlate with the readmission. And then, based on that, develop a model that you could apply to the workflow today that would essentially attempt to predict who is in the hospital today, what the likelihood is that they will be readmitted, and most importantly, why they might be readmitted.
Because if you knew those things, then while you’re there today, you’d fix them today. And so the top row in this example shows that, in running this model on a day’s worth of data, the top-most patient has a 38 percent probability of being readmitted. That’s pretty high.
And so in the example that I’m showing you here, these were patients who were admitted for congestive heart failure, and what the analysis showed, which was really never, at least systematically, understood by the medical community, was that if patients admitted for congestive heart failure had previously had a gastric ulcer, or were being given any gastrointestinal drugs while they were in the hospital, their probability of readmission rose dramatically.
And you would say, well, what does their stomach, or even historically their stomach, have to do with this congestive heart failure? And it turns out, I guess, there’s some interaction between the drugs that people with GI problems get treated with and the way that you’re treated for congestive heart failure. But, basically, the cardiologist doesn’t think much about your stomach, and the stomach guy doesn’t think much about your heart. And so if they happen to coincide, you end up back in the hospital.
Now one thing that was fascinating was if you were diagnosed clinically as being depressed, you had a dramatically higher incidence of readmission, and that also could have been a drug-related thing, or in fact just the mental state of the person and how they cope after discharge with the current state of affairs.
But to some extent, no one was really looking for these things, and certainly no one had the ability to look back through all of the patient’s history where other treatments are going on and in real time correlate that in order to eliminate these problems at the point that it happened.
So, by doing this, we believe we have proven now that we can dramatically improve these outcomes. Hey, if you’re in the hospital and you don’t come back, that’s a better outcome. And of course the costs associated with the second event never occur. And so at very low marginal cost, in essence you’re not doing anything except training the computer system on data that you already have in order to predict this. And if you integrate it into the workflow, every morning you come in and it says, OK, Craig in Room 204 there is likely to be readmitted because he actually has a GI infection that’s being treated too, so why don’t you pay special attention to that.
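Editor’s note: a hypothetical F# sketch of the kind of risk scoring being described. The model form (a simple logistic score) and every weight below are invented for illustration; the real system learns its model from the hospital’s own ten years of records.

```fsharp
// Entirely invented weights, purely to show the shape of such a model.
type Patient =
    { AdmittedForCHF: bool
      PriorGastricUlcer: bool
      OnGIDrugs: bool
      DiagnosedDepression: bool }

let sigmoid z = 1.0 / (1.0 + exp (-z))

let readmissionRisk (p: Patient) =
    let score flag weight = if flag then weight else 0.0
    sigmoid (-2.0
             + score p.AdmittedForCHF 0.4
             + score p.PriorGastricUlcer 0.9
             + score p.OnGIDrugs 0.7
             + score p.DiagnosedDepression 0.8)
```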
And I think that this kind of thing shows the power of doing this. Clearly, here, you’re integrating across a lot of sensor platforms; when you think about the number of sensors and devices that you encounter in the hospital, being able to monitor these things in real time is just really incredible, and of course that’s one of the things that we want to do more and more. But building predictive models, and being able to do this for a wider and wider array of disease cases, is obviously a challenge.
You know, the other list of challenges that we put on the screen here, these are things that we can clearly see as problems yet to be solved. And so for every step that we take in this direction, and while we can already see some potentially immediate benefits, you know, we begin to bump into a new class of research issues that will either impinge on our ability to deploy these things like the privacy and security issues, or where there’s just a new level of semantics associated with the problem that we have to work on if we’re really going to convert these things from research assets into deployable technologies.
But they’re very exciting, and it shows how this sort of big data environment I think is going to go from sort of an offline analytics environment to essentially a real-time phenomenon that will be integrated into very important business processes.
I want to move on a little bit and talk about natural user interfaces. This is something that I have certainly been a big advocate for. We’ve been gradually moving in this direction in the research domain for many years, certainly at Microsoft and elsewhere. We’ve got well over a decade of investment in trying to emulate all the human senses: hearing, vision, speech, et cetera. And of course we’ve seen the world move beyond traditional point-and-click, and particularly with the miniaturization and focus on mobility from the phones, the direct manipulation of the graphical interface through touch has become very, very important as well.
But so far, and I’ve said this many times before, each of these things was most frequently used as an alternative way to operate the graphical user interface. Now, step back fundamentally and ask yourself: is there just a better way for people to interface with computers, at least for a class of problems that goes beyond the ones to which we’ve applied them in the past? And we believed that there was.
So, in the time since the last one of these summit meetings, a very profound thing happened for us, which was the launch of Kinect. Kinect combined voice and, very importantly, machine vision to create a sensor whose input we could fuse together in real time, and so create an alternative way for people to interface, in this case, with the Xbox game console.
And you know, it’s interesting if we go back and think about this. It was, I guess, about three and a half years ago now that the business group that builds the Xbox was looking at the desire to change the demographics of its customer base beyond the sort of males 12 to 30. They wanted to expand that out, both in terms of gender and age, and they wanted much more ability for casual gaming. By that we mean the ability for people to get into a game quickly and enjoy it without the challenge of mastering the traditional, complex game controller.
The game controller as we’ve known it is, in my view, much more like a musical instrument than anything else. I mean, you know, you can become a virtuoso at playing this instrument, but only with some skills and gifts and a hell of a lot of time invested. And what we really need is a way for people to take what they already know and just get in there. You know, in a sense, it’s a bit more like karaoke than playing the piano.
And so Kinect was a dream that they had, which was controllerless gaming. You know, they’d seen the progress that, for example, Nintendo had made by introducing the Wii, a motion-oriented controller, relatively simple, and a set of games that were evolving around that. But it became clear to them that if we really wanted to have a breakthrough in this area, we really should connect the dots, if you will, and move beyond that. And the obvious place to land would be to have no controller.
And after thinking about it for a while, they pretty much concluded that it didn’t seem like it was really possible at that point. But they came over and sat down with the MSR teams and said, “Hey, here’s the problem we want to solve. We want to have controllerless gaming. Could we think of ways that that might happen?”
And so we started to look at it and, indeed, we found that there was a whole array of technologies, many of them having been developed in MSR for years, none of them specifically focused on this problem of controllerless gaming, obviously. But when we brought them together, what appeared to be impossible became possible. And as soon as that happened, of course, this thing became a phenomenon. The Kinect sensor got into the Guinness Book of World Records earlier this year for the fastest zero to eight million units of anything that’s ever been built and sold. And I think it showed that it hit a nerve in a positive way with the gaming public, and they could see that there was real value in this.
For us, it was just the tip of the iceberg. But having put it out there, it also became clear that many people immediately began to, if you will, fantasize about, well, what could you do with this thing? You know, beyond what Microsoft was clearly doing with the first genre of games.
And so, you know, almost immediately the device was taken off the Xbox. We intentionally had not really tried to protect it in any way. It had a standard USB plug on the end. And you saw the community develop within a week or two a very primitive set of interfaces on PCs that would allow them to hook up the camera and begin to explore it.
So, you know, this light-saber guy was on YouTube pretty early on in the process and he said, “OK, this is what we could do.” Well, it turns out, there were obviously a bunch of Trekkies around Microsoft too, and so they said, “Well, we actually have a vision for this.” And I guess about eight weeks ago now, we actually announced a product, which has the light sabers. And this thing will be part of the fall lineup of the new wave of games for the Kinect. So, this is sort of the commercial version of what happened with the guy on the left.
But I think what it showed us was that there was, in fact, a huge array of opportunities for the deployment of this type of technology, and that putting it out on the game console was just the first step in terms of getting people to focus on it.
You know, here’s a list of things that when we looked around came from three labs on three continents, you know, from about eight or nine different groups. And it was the synthesis from all that research that made this thing possible. And, in fact, almost all of those people sort of embedded themselves with the production people for almost three years because, yes, we had a research result, but then trying to meet the constraints of deploying this thing for a device that had to sell at Best Buy for $149, that was essentially an additional set of challenges. And a huge amount of collaboration and refinement was done by the research community here at Microsoft, coupled to the product group, and we’re extremely proud of that result.
There is clearly a huge array of problems that have nothing to do specifically with the sensor or even the basic algorithms: trying to deal with the sensor fusion and give the right experience in high-ambient-noise or variable-light environments, for example.
One of the problems that we’ve solved part of, and there is certainly a broad class of them, got the name “the annoying little brother problem.” And this is the one where you’re standing there in front of the game and your little brother comes in, and he stands behind you, and he starts going like this too. And you say, OK, that cannot confuse the game. The game has to be able to distinguish you from your annoying little brother, and how does that happen?
Well, it turns out it happens not because we have the game developer looking at the raw image output and trying to say, OK, that thing moving, is that an arm or not? What really happened was we built a model of the human skeletal system. And through a whole chain of processing techniques, what we build and give to the developer is a fairly robust skeletal model. Basically, the 42 major joints were what we mapped in that first one. We could do it for four people simultaneously at 30 hertz.
We’re doing this with essentially a few percentage points of the CPU power of the Xbox, because the game gets everything else. And so one of the challenges was how to get this incredible amount of processing done with no specialty circuitry in the camera, because that would make it too expensive. So, we have pretty much a raw data stream that goes in there and has to be processed by a tiny corner of the CPU, in essence. But we solved that problem, and of course there are many, many more that ensue.
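Editor’s note: the following F# sketch is not the Kinect SDK; it only illustrates the kind of logic the skeletal model enables once a game receives per-frame joint data, here distinguishing the locked-on player from the annoying little brother.

```fsharp
// Invented types, not the Kinect SDK: one skeleton per tracked player per frame.
type Joint = { Name: string; X: float; Y: float; Z: float }
type Skeleton = { TrackingId: int; Joints: Joint list }

// Keep following the player we locked onto; otherwise lock onto whoever
// is closest to the sensor (smallest average depth). Assumes non-empty joint lists.
let pickPrimary (lockedId: int option) (skeletons: Skeleton list) =
    match lockedId, skeletons with
    | _, [] -> None
    | Some id, _ when skeletons |> List.exists (fun s -> s.TrackingId = id) ->
        skeletons |> List.find (fun s -> s.TrackingId = id) |> Some
    | _ ->
        skeletons
        |> List.minBy (fun s -> s.Joints |> List.averageBy (fun j -> j.Z))
        |> Some
```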
You know, I showed you one of the examples of the kind of things that people do when you let them think broadly about this, whether it’s in the gaming community or something else. But one of the things that I think is very telling about the power of this natural user interaction — I’m going to demonstrate with the next video.
This one happened fairly recently. And it’s an unmodified version of the game console using essentially just some of the standard games. But there’s a lot of interest in the medical community in trying to use this ability for people to place themselves onto the screen, or do things with other people socially, in ways that are very powerful. And I was really touched by this particular video, and I wanted to share it with you as a way of thinking about why we are doing all this stuff, and what kind of creativity is out there for the application of computers in this more natural realm. So let me play this for you.
(Video segment.)
CRAIG MUNDIE: You know, I think that this is a great example of why we do all the things that we do. And every time I look at these kind of applications, it really gets me charged up to go on and do more.
One of the things I found really interesting was that the school came to us to tell us about this. You know, we didn’t go to them. They said, “Hey, you know, this is just unbelievable, and you need to help us get other people to understand the power of these kinds of technologies.”
They mention here the social interaction. I think that everybody talks about social networking today, but I think that this is really the beginning of a much broader form of computer-mediated interaction. As we moved from, let’s say, the phone, and then in a passive way through television, we now have sort of very weak forms of interaction at a distance with video conferencing. But a lot of these things lack the naturalness that I think really lets people just suspend disbelief and get into the thing.
So, we’ve also been thinking about how we can do more of that. About the time we started the Kinect development, being sort of personally passionate about this telepresence concept and the belief that it would also be important, I started a project that is now called Avatar Kinect. We announced in Ballmer’s talk at CES in January that this thing would be released, and in fact, this month — in the few days that remain — it will go live worldwide.
What we wanted to do was essentially go beyond just the skeletal animation and create a product that would be the first step toward three-dimensional, multiparty, telepresent interaction, and to use the Kinect technology to do that.
We started with the game environment, one, because that’s where we had Kinect, two, at this stage, we have to focus on caricature-type avatars because there’s a real challenge in ultimately crossing what people call the “uncanny valley” into the place where you could have photo-real avatars. And if you get stuck in the middle someplace, it’s kind of weird, and people don’t like it, there’s cognitive dissonance.
But humans are incredibly capable of taking cues from cartoon characters. They grow up with them; they sense very basic emotional elements from these things. And so it turns out that for all the reasons cartoon characters and caricatures actually are meaningful to people, so are these avatars. And they’ve gone from being sort of a quirky thing in the early days of Xbox to something that’s a very integral part of how people interact in that environment.
So, there are probably 100 or 200 million avatars on Xboxes that have already been made, and of course it’s a fairly young demographic for the population. And so we decided, well, if we want to experiment in this space, let’s start with the Xbox community and create this product.
So, I’m going to play you just a little trailer for this product that will be coming out very, very soon so you get an idea if you haven’t seen it before.
(Avatar Kinect video segment.)
CRAIG MUNDIE: Last week, I think we did a first related to this. I taped a show with Maria Bartiromo on CNBC, The Wall Street Journal Report, which aired yesterday (some of you may have seen it), where we actually did basically the same kind of thing on an interview set. We did half the interview live, face to face, and the other half of the interview we did as avatars. And they basically put it all together and tried to help people understand what it’s going to be like to be able to have this multiparty interaction in business. I think that was certainly the first time anybody I know of has done an interview on national television as an avatar.
But, to do this, we actually had to go beyond what we did in the gaming environment. We had to look at animating your face, because if you want to have any kind of natural interaction that’s not just moving around, you have to get the facial animation right. This is a real challenge, because the resolution of these sensors, particularly in a low-light environment, isn’t all that good. And so it’s much more difficult to just say, “Oh, look, let’s see the pixels of my eyebrow and figure out where it goes.”
And so we developed a facial mesh model, a 3-D model of the face. We basically take the points that we can reliably get out of the face, and then we know that your face can’t contort arbitrarily. And so, as you move your face, we’re essentially moving the control points on the mesh, and then that gets mapped onto the avatars. So, let me show you a little bit of how this works.
(Video segment.)
CRAIG MUNDIE: So, this moves us another step forward, where we not only can get your general coarse body movement, but we can start to get your facial movements correct. And by mapping them onto the features of the Xbox Avatars, which we embellished in the last generation to give just enough facial elements to convey some of the major human emotions, you can see these things.
So, by the movement of the eyebrows, and a little bit the eyes, and mostly the mouth, you’re able to essentially correlate that with what people are really doing in front of the sensor.
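Editor’s note: a toy F# version of the control-point idea described above, with invented structure rather than the shipped model: each avatar mesh vertex moves as a weighted blend of the displacements of the few points the sensor can track reliably.

```fsharp
// Invented structure, not the shipped model.
type Vec3 = { X: float; Y: float; Z: float }
let add a b = { X = a.X + b.X; Y = a.Y + b.Y; Z = a.Z + b.Z }
let scale s v = { X = s * v.X; Y = s * v.Y; Z = s * v.Z }

// weights.[v].[c] says how strongly tracked control point c pulls mesh vertex v.
let deform (restPose: Vec3[]) (weights: float[][]) (controlDeltas: Vec3[]) =
    restPose
    |> Array.mapi (fun v rest ->
        controlDeltas
        |> Array.mapi (fun c delta -> scale weights.[v].[c] delta)
        |> Array.fold add rest)
```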
But, to do this, there were a lot of problems that had to be solved, how to capture the expression and portray it, how to handle all of the sensor data in a very low latency environment, because you don’t want that weird sensation where sort of the lip movement doesn’t match the audio track. So, there was a huge array of things that have gotten solved. But at least the V.1 product is going to be out there in the next few weeks, and if you’re an Xbox LIVE Gold subscriber, you can essentially have multiparty meetings with your friends anywhere in the world, up to eight people at a time, as your avatars.
Clearly, there’s just a ton of research problems that this begs, from how to make the avatar more photo-real over time, at which point you could say these meetings aren’t just social and for fun anymore, they are real meetings. I believe someday we’ll be able to have this meeting and you’ll all just be sitting in your offices, and I’ll look out there and I’ll see all of you just like this, except none of us will really be here. It will all be a 3-D stage.
But to make that happen obviously requires a lot more effort in many dimensions. Another thing we’re working on, and we did some work on it in Avatar Kinect, is tracking the hands. In the major games, we stopped at the wrist, because at that distance there’s not enough sensor resolution to do the individual digits of your hand. But when you get a little closer, and you’re not moving so fast, we can basically even get down to the hand and finger movements. So, all of these things will improve: obviously the silicon will improve, the sensor technology will improve, and we’ll keep moving that along, too.
Once you build this thing, you immediately realize what the next problem is. We created, in this case, I think 16 stages that range from some for little kids, to the tailgate party, to the TV interview set, and you basically can have your avatars meet in any of those stages to have the current set of interactions. But obviously, if this is going to become broader, or be applied ultimately in more commercial applications, you don’t want to just use the sets that we have; you want to make your own.
[DEMO CONTENT EDITED OUT]
So, just to close, I want to show you a hypothetical demo that we put together to help people understand, again, in this case I chose a medical kind of setting for the application to say, how will all this sensor fusion, natural interaction, camera, voice, gesture, and touch all come together in what might be the office, or doctor’s office of the future?
One of the things we believe is that whether the surfaces are horizontal or vertical like this one, they will ultimately come equipped with this type of machine vision capability, array microphones, and touch capability. And it will become very normal for people to interact with the computer system through these mechanisms.
When people stand in front of an Xbox with Kinect, they don’t think about the technology at all. In essence the thing could be completely invisible. As far as they’re concerned, they’re just looking at this big screen, they don’t think about anything else. And that’s the way I think computing has to be for us all in the future, is that you shouldn’t address the computer, or have to go to the computer to get something done. The computer will be sort of all around you, and you’ll have many points of presence for the computational capabilities that we have.
So, we built a hypothetical scenario here. Let’s go back — can you start this thing over again for me? OK. Now, can we just go back to the beginning? I’ll tell you a little bit while they start that, what I’m going to try to show you.
We think about a world where people want to not only interact with the computer this way, but where they want to be able to interact with other people. I showed you the telepresence model. Here I’m going to show you some different ones. But, here the camera is there, and when I walk in front of it, the system should start and show me some people.
When I walk in front, the system realizes that I’m here now, and it can actually present these things with a lot more granularity. When I just walk into the room, if it doesn’t know it’s me, or it knows I’m standing far away, the interface may present things at a much coarser granularity.
So, here I might be a doctor, and I want to confer about these subjects, or patients, with somebody else. So, I can say I want to confer with one of my colleagues, and if I can step back, now I’m using the camera more in the telepresence model, and I want to interact with one of my colleagues. Here I have a gestural interface, so I could essentially pick things by pointing at it, much as I might in the Xbox.
But here I might use speech input. System, select the patients with BMI greater than 33. So, it might pick these people out. One of the things this colleague and I are discussing is essentially a new clinical trial. So, I can say, “System, select the people that qualify for the new metabolism trials.” So, it might pick these five. In this case I can look at these people; I know whether I have dealt with them before. We may have primed all of them to be considered if there were trials available. I can say, “System, enroll these people in the trial.”
So, it might go out, collect all their data, anonymize it, and send them all an email to get their actual permission. So, the idea is that more and more of the workflow automation happens within the computing environment, and mobilizing it through these kinds of interfaces will, I think, become more the norm compared to the point and click that we have in a traditional PC environment.
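As a concrete, if simplified, picture of the workflow behind those spoken commands, here is a minimal sketch in Python. Every name in it (Patient, send_permission_email, the eligibility rule) is hypothetical and invented for illustration; it does not describe an actual Microsoft system.

```python
# Minimal sketch of a voice-driven enrollment workflow. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    bmi: float
    primed_for_trials: bool  # pre-agreed to be considered if trials become available

def select_high_bmi(patients, threshold=33.0):
    # "System, select the patients with BMI greater than 33."
    return [p for p in patients if p.bmi > threshold]

def qualifies_for_metabolism_trial(p):
    # "System, select the people that qualify for the new metabolism trials."
    # Placeholder rule; a real trial would have much richer criteria.
    return p.bmi > 33.0 and p.primed_for_trials

def send_permission_email(patient):
    # Stub: in the scenario, the system emails each patient for explicit consent.
    print(f"Permission request sent to {patient.name}")

def enroll(patients):
    # "System, enroll these people in the trial." Collect data, anonymize, ask consent.
    anonymized = [{"bmi": p.bmi} for p in patients]  # no identifying fields kept
    for p in patients:
        send_permission_email(p)
    return anonymized

roster = [Patient("A", 35.2, True), Patient("B", 29.8, True), Patient("C", 34.1, False)]
candidates = [p for p in select_high_bmi(roster) if qualifies_for_metabolism_trial(p)]
enroll(candidates)
```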
Here I might want to look at an individual patient. System, show me data on Lori Penor. So, Lori might be a patient that we’ve been dealing with for a while. In this case she’s a diabetic. We’ve had her on a program, both in terms of trying to manage her diabetic condition and the psychological issues she’s had related to it. This may contain a whole lot of data that comes from the clinical environment, and some may come from, for example, our HealthVault service, which might be a consumer repository where her data, from things like her Polar heart monitor or a scale, creates a clinical record that’s continuous, as opposed to just episodic when she visits the doctor.
If I look at this, that’s her weight. I can touch it and say, show me her caloric intake as we’ve been tracking it. And here it basically points out that something is wrong, or changing. She sprained her ankle and reported that her activity levels have declined quite dramatically. I can essentially drag those up onto the chart and look at them in a comparative way, and say, this is clearly having an effect on her weight loss.
But here’s an example of something that came up when we started talking to people about the Avatar Kinect product. I would say, “Hey, the dream we have is to get these avatars to be photo-real.” And one of the things that happened was the people in the medical community said, “Hey, you know, we have a lot of applications where we don’t want them to be photo-real. We want them to stay caricatures, because there are a lot of cases where the ability to do this anonymously is really valuable.” So, we took some of the things that were described to us, and we built a prototype here, which you could think of as Avatar Kinect plus one, at least. And I’m going to show you a way of thinking about how you can use this thing in a more medical context.
So, here we took the same basic technology of Avatar Kinect. We upgraded the quality of the avatars to be slightly more photo-real, and what we thought about is a group session: you’ve got a moderator, and you’ve got people who are actually sitting at home, but they’re having a group therapy session. As a result they’re able to have a social interaction and still maintain some anonymity.
So, here the sessions can be recorded and transcribed, just as Avatar Kinect sessions can be recorded and emailed to your friends or shared online. So, here we say, well, we’ll record these things and then the doctors can essentially come back and review them.
So, for example, if I touch on this one, I can see this therapy session as it was recorded, even though none of the participants were actually in the same place. And I can see how they react. There can be a moderator who is essentially coaching these people, or looking at them, or perhaps telling me I should worry about the way they’re thinking about this.
But, of course, the person I cared about in this case was Lori, not this guy that happened to be talking at the time. So, the nice thing about these things being a 3-D environment is that there’s no fixed viewpoint. You can essentially run the model again and look at it from any place you want.
So, in this case this is Lori, and I’m worried about her and whether she’s interacting. So, I could essentially play it again. And this time, even though the guy talking is over here, I can decide I really want to observe her. And since her basic body pose and facial elements are being sort of captured in this process, even though essentially nobody else is there, I might look at this and say she appears to be really emotionally disengaged. And I might decide to refer her to one of my colleagues.
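For readers who want the mechanics behind that free-viewpoint replay, here is an illustrative sketch with invented names and structures: the idea is that a session stores per-frame pose and facial parameters rather than rendered video, so playback can re-render the scene from any camera position.

```python
# Illustrative sketch only: a recorded avatar session as pose data, not pixels.
def record_frame(timestamp, joint_rotations, face_params):
    # One frame of a session: skeletal joint rotations plus facial animation
    # parameters (e.g. brow, lid, mouth values), not video.
    return {"t": timestamp, "joints": joint_rotations, "face": face_params}

def render(frame, camera_position, look_at):
    # Stand-in for a 3-D renderer; here it just reports what it would draw.
    print(f"t={frame['t']:.2f}: render poses from camera at {camera_position} toward {look_at}")

def replay(session, camera_position, look_at):
    # Playback re-renders the same pose data under a freely chosen camera,
    # so a reviewer can focus on any participant after the fact.
    for frame in session:
        render(frame, camera_position, look_at)

session = [record_frame(t / 30.0, {"neck": (0, 0, 0)}, {"mouth_open": 0.1}) for t in range(3)]
replay(session, camera_position=(0, 1.6, 2.0), look_at="Lori")
```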
So, the whole idea of building these new applications, where workflow and analytics are all presented in this kind of environment I think is the future of a lot of business process automation. And while it’s easy to think about or fantasize, at least, about how these things might happen in that kind of medical environment, I think that this will happen virtually everywhere.
So, whether it’s changing the way, the modalities by which we interact with the computer, thinking about how big data and these cloud assets are going to get coupled together, or finding just completely radically new ways of thinking about what we should be doing with these computer systems, all of these things are really becoming much more tangible and possible today than they have been certainly in my 40-odd years of working on this. And so I think it’s just been an incredibly exciting time. And yet, each step forward demonstrates that there’s so much yet to be done.
So, let me stop there and use the time we have left available for some Q&A. Thank you very much. (Applause.)
Anybody with a question, comment? OK, right here. I think there are some microphones, if you just wait we’ll hand you one so everybody can hear you.
QUESTION: I’m in a group that’s looking at medical devices, and we actually started looking at the Kinect, possibly using it for remote sensing, for example to tell if a patient is getting out of bed. Who would we talk to? Are there people at Microsoft we can talk to to learn more, because we’re just learning how all that works, and we’re pretty interested in medical applications. There might be people that want to talk to us.
CRAIG MUNDIE: One of the things that we just released is the SDK for this. Microsoft Research has built a research toolkit for the Kinect sensor that includes a lot of these high-level processing capabilities. When the community built their own drivers, they were really just providing you the raw sensor data, but much of the stuff that we’ve learned and built to give to the game developer we’ve put into this kit, including the ability to deal with the array microphone.
So I think there are actually two sessions at this summit on the SDK for the Kinect, one on the sensor fusion capability and the kit itself, and another one I think on some of the application layer stuff. So, I would say first, go to those two sessions, there will be a little tag there you can just scan with your phone or your badge, and they’ll hook you up to the right people who can talk about that.
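To make that distinction concrete, here is a hedged sketch of the kind of high-level data such a kit exposes, tracked joint positions rather than raw depth pixels, applied to the questioner’s get-out-of-bed scenario. This is not the actual SDK API; the stream format and the detection rule are invented for illustration.

```python
# Hypothetical illustration only; not the Kinect SDK's real interface.
def hip_height_stream():
    # Stand-in for a stream of tracked hip-joint heights, in meters, over time.
    yield from [0.55, 0.56, 0.58, 0.72, 0.95, 1.02]

def detect_bed_exit(heights, bed_height=0.6, standing_height=0.9):
    # Flag when the hip joint rises from bed level to standing level.
    was_in_bed = False
    for h in heights:
        if h < bed_height:
            was_in_bed = True
        elif was_in_bed and h > standing_height:
            return True
    return False

print(detect_bed_exit(hip_height_stream()))  # True for the sample stream above
```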
Yes, sir.
QUESTION: Can you extend your vision a little bit into smartphones and tablets and other devices that might be used in this environment?
CRAIG MUNDIE: I guess my belief is that all of the things I talked about here, which today have to live in a separate device like the Kinect sensor, will go through the same progression that we’ve seen with other sensor technologies. I could dream about a day where anywhere you have a camera today, the back of your cell phone or the bezel of your laptop, there’s no reason to think that over time that camera shouldn’t be this kind of camera. There’s obviously a lot of work yet to go to produce that level of miniaturization, but I don’t see any fundamental reason to think that wouldn’t happen. And therefore many of these things I think will be available in the mobile environment in one form or another.
One of the things that I find really particularly interesting about the Avatar Kinect in the mobile environment is that it’s almost a zero bandwidth requirement. In fact, it takes little more than the bandwidth of the voice call itself in order to be able to animate the avatar in real time completely, because all you’re sending is the movement commands, and then the avatar animates based on the computation at the other end. And so, the ability to just hold a phone or a tablet and stick it out there, and essentially have a telepresence meeting even in an environment where you have very, very weak mobile connectivity bandwidth I think is an interesting future application.
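A quick back-of-envelope check on that bandwidth claim, using assumed numbers that are not quoted in the talk (roughly 20 tracked joints, three rotation values per joint, four bytes each, 30 frames per second, sent uncompressed):

```python
# Assumed, illustrative figures; the real protocol and payload are not described here.
joints, values_per_joint, bytes_per_value, fps = 20, 3, 4, 30
bits_per_second = joints * values_per_joint * bytes_per_value * 8 * fps
print(bits_per_second / 1000, "kbps")  # ~57.6 kbps before any compression
```

Even uncompressed, that is in the same ballpark as a voice call; quantizing the joint angles and sending only deltas would push it far lower, while streaming video of the same scene would need orders of magnitude more.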
Number three back there?
QUESTION: So, we’ve been working with the technologies behind Kinect, machine vision, computer vision, for a long time, so we’re really excited about the computer vision aspects of Kinect. You mentioned that the Kinect has gone from zero to eight million faster than anything, in terms of records, and that’s really fascinating.
So, what is your experience? Can you tell us more about that: what are the demographics, is that across countries?
CRAIG MUNDIE: I forget the exact number, but basically Kinect is available everywhere that Xbox is now. So, I forget the exact number of countries, but it’s a fairly large number. At the launch, we were building them as fast as we could. In fact, our original hope for those first two months was that maybe we could find five million buyers. But, in fact, in the first 60 days, we had eight million. So we had to hurry up and really push things onto airplanes from the factories and get them distributed around the world. But we did manage to sell eight million of them in those first 60 days.
And it has had the effect of dramatically broadening the demographic. In the Kinect game series, there are probably at least as many girls and women playing these games as there are males. And, of course, that was never true in the traditional Xbox environment. It’s also true that not only is the gender mix more balanced, but the age group changed dramatically.
There are a number of things you can do with the Avatar Kinect. One of them is a Kinect videoconference facility. What’s interesting about it is that, while it’s just more traditional videoconferencing, the camera tracks you as you move around, and you’re now sitting a significant distance away from the screen. Many of the problems you have with traditional, PC-based videoconferencing come from being so close that the angular displacement of the camera from where your gaze is gives you that very weird sensation. When you’re far back, that angle becomes just a couple of degrees, and the gaze problem is sort of automatically corrected.
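A quick sketch of that geometry, with assumed distances (a camera roughly 15 centimeters off the line of gaze; the distances are illustrative, not figures from the talk):

```python
import math

def gaze_error_deg(camera_offset_m, viewer_distance_m):
    # Angle between where you are looking (the screen) and where the camera sits.
    return math.degrees(math.atan(camera_offset_m / viewer_distance_m))

print(round(gaze_error_deg(0.15, 0.6), 1))  # ~14 degrees at laptop distance
print(round(gaze_error_deg(0.15, 3.0), 1))  # ~2.9 degrees from across the room
```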
In addition, we use beam forming on the array microphone, so that even though you’re sitting maybe 10 feet away, the signal-to-noise ratio is dramatically improved. In fact, one of the things that’s happening now is we’re starting to fuse the sensors together. In the newest version of the software, we use the machine vision to figure out where your head is, and then we track your head with the beam of the array microphone. So, even before you start to talk, the thing knows where you are and is only listening toward your mouth. So, even though you’ve got the vacuum cleaner and other things running in the house, and you know those are going to be loud, the beam basically focuses only on you. So there are a lot of very interesting things happening. But it has had the effect of broadening the demographic, both in age and gender.
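For the technically curious, here is a minimal delay-and-sum sketch of what steering a microphone array toward a vision-tracked head position can look like. The array geometry and all the numbers are assumed for illustration; this is not the Kinect’s actual audio pipeline.

```python
# Minimal delay-and-sum beamformer steered by a tracked head position. Illustrative only.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals, mic_positions, head_position, sample_rate=16000):
    """Align each microphone's signal to the tracked head position, then average."""
    head = np.asarray(head_position, dtype=float)
    distances = [np.linalg.norm(head - np.asarray(p, dtype=float)) for p in mic_positions]
    ref = min(distances)
    out = np.zeros_like(mic_signals[0], dtype=float)
    for signal, d in zip(mic_signals, distances):
        delay_samples = int(round((d - ref) / SPEED_OF_SOUND * sample_rate))
        out += np.roll(signal, -delay_samples)  # advance later arrivals so they line up
    return out / len(mic_signals)

# Four mics in a line (x positions in meters, assumed), head tracked ~2 m away, slightly left.
mics = [(-0.113, 0, 0), (-0.036, 0, 0), (0.036, 0, 0), (0.113, 0, 0)]
signals = [np.random.randn(16000) for _ in mics]  # stand-in for captured audio
focused = delay_and_sum(signals, mics, head_position=(-0.3, 0.0, 2.0))
```

Sound arriving from the tracked direction adds coherently after the per-microphone delays, while noise from elsewhere (the vacuum cleaner in the example) does not, which is where the signal-to-noise improvement comes from.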
The lady right over here.
QUESTION: Have you thought of extending the avatars to the SDK itself, so instead of just coding by typing, you code or you build models with your avatar?
CRAIG MUNDIE: It’s not something I’ve thought about, but you’re welcome to try this afternoon. We actually have done a lot of work in various parts of MSR thinking about synthesizing programs by means other than the traditional enter-the-code, compile-it, get-a-program route. So, we’re quite interested in that, although I haven’t, myself, seen it done with avatars. You can ask Rick; he may know if there’s anybody who is thinking about how the avatar might play a role in code production.
When I did this interview with Maria Bartiromo at the end, she said, “This is pretty good. Maybe I won’t have to come to work in the future. I’ll just sit here.” So, I mean, maybe you can just send your avatar to type on a keyboard for you.
Any other questions? OK, in the back there, last question.
QUESTION: Have you ever thought of a world where we don’t have to get out of the house, and can just stay inside and play out in the courtyard without having to physically be there? Wouldn’t that be fun?
CRAIG MUNDIE: I guess it depends on your vantage point. But one of the things that people, at least parents, actually like about Kinect is that the kids get up and move. There’s been a lot of concern about whether you’re, one, isolated and, two, sedentary as you play these games. The fact is, in the Kinect environment, another thing we’ve observed is that even within the room you’re in, it’s a far more social environment. Sitting there watching somebody move a joystick and seeing what happens on the screen was never very social. You could have multiparty games, but the local socialization component was pretty low.
In the Kinect environment, it’s very high. People get a real charge out of watching others participate, and being able to watch simultaneously their physical actions, their actions in the game, and those of other people. And so, I think that the ability, or the necessity, to move is viewed by many as an advantage. I don’t know whether getting away from actually having to go outside will be viewed by everybody as a good thing, but certainly you can experiment with that, too.
Well, thanks for your attention. I hope you enjoy the conference, and thanks for coming and visiting us. Bye bye. (Applause.)
TONY HEY: All I have to say is that it’s just after 10:30, the session starts at 11:00, so make sure you pick your sessions, enjoy yourselves, and have a great couple of days. Thank you very much. See you at 11 o’clock.
END