Kevin Schofield: Convergence 2009

Remarks by Kevin Schofield, General Manager Microsoft Research, on research into new sensing capabilities, user interfaces, and imagery.
New Orleans, La.
March 12, 2009

ANNOUNCER: Ladies and gentlemen, please welcome General Manager Microsoft Business Solutions Marketing Chris Caren.

CHRIS CAREN: Thank you, band. Love that intro. Good way to wake up, better than coffee. Anyway, good morning everyone, thanks for coming. I hope you're enjoying Convergence. Is everyone having a good time so far? (Cheers and applause.) All right! Very good.

Well, we’re just past halfway through the event, and this is our second and final keynote of the event. And in a few minutes I’m going to hand things over to Kevin Schofield from Microsoft Research. I wanted to also quickly just sort of showcase an event that went on in parallel to Convergence. I mentioned it briefly on Tuesday. About 400 of us went out and worked on Monday at Habitat for Humanity projects out in the Ninth Ward helping to redevelop housing in some of the areas hardest hit by Katrina and Rita. We had in total about 150 partners, about 150 customers, some of whom were not yet customers on Monday, and hopefully more are customers by now, and about 100 Microsoft employees. So 400 folks in general did some amazing work, helped build a few homes from where they were sort of coming together in the morning to a lot more close to complete by end of day.

One interesting and kind of for me amazing part of the day was this: A woman came up who was going to live in one of the houses we were building, and she drove up to check on the work and thank us, and she actually still lives in her car. So for her moving into this new home actually is returning to a normal way of living. So it just shows you some of the great impact of the work that we’ve done.

So we've put together a quick three-minute video I wanted to show now that just overviews the day and the work that was done. It was a lot of fun for all of us, and we wanted to share the video to give you guys a feeling for what happened on Monday. So, let's roll the video.

(Video segment.)

It’s a great event. And thanks again to all of you who came out on Monday, and we’re heading back here again in July with Worldwide Partner Conference, and hoping to have a similar event then, and continue to give back as much as we can.

So before I hand things over to Kevin, I wanted to run through a couple of fun, interesting facts about Convergence for those of you who care. We always do this. It's all information that's interesting to know, and hopefully some of it helps you think about how to spend the remaining time you have at the event in the next day-and-a-half. So, top five interesting facts.

Fact No. 5, there were over 500 million steps taken in this convention center by attendees. You could almost think of the center, as nice as it is, as a grand hallway rather than a convention center. But we got our exercise and got to learn at the same time. That's the good news.

No. 4, donations made so far. Remember, for every evaluation you complete, we give a dollar to the Boys and Girls Clubs. There are about 7,000 of us here today, so we have a long way to go to get evaluations complete. It's really useful for us to learn how to improve the event, so please, if you haven't, complete your evaluation. It also enters you for a chance to win $5,000, and it helps us give a dollar to the Boys and Girls Clubs.

No. 3, Convergence Connection Meetings. These are a way for you to go and meet with a Microsoft person, a peer of yours who uses your products, a peer of yours from the industry, someone you might not otherwise meet at the event, who you can trade thoughts with and borrow some ideas from. We've had 1,200 to date. You can still sign up for connection meetings, so if you haven't and you're curious, please go ahead and sign up. It's a really valuable part of the conference, and we encourage you to use it if you haven't already.

No. 2, another piece of good news: the Dow Jones was 250 points higher at market open today than when we started the event. (Applause.) Just months ago, I think that would have been a 2 percent rise in the Dow; unfortunately, now it's a 5 percent rise. But it is moving in the right direction. We'll take partial credit for that, all of us.

And finally, No. 1: last night we had a very good piece of good news. Despite us being here in New Orleans, we did a quick headcount and estimated there were fewer arrests, and fewer people lost or still out on Bourbon Street as of 8 o'clock this morning, than you might expect.

So, anyway, some good thoughts about the event. Again, please do your evaluation if you haven't already, and the connection meetings are a great way to get some more value from the event if you haven't tried them.

So without further ado, I want to talk about next year, and reveal the city and dates for next year’s Convergence. I’m happy to report, a year from now we’ll all be meeting up in the beautiful city of Atlanta. (Applause.) So April 25-28 will be the dates for Convergence 2010. There is a way for you to sign up on our Web site to get alerts on the event, and updates. But please if you can try and come back and join us again in Atlanta in about a year’s time, and we’ll look forward to another successful event then.

So this is the last time we're together as a group. Kevin is going to come out and give his keynote, and we're going to have a Q&A with him after that. So one quick ask: if questions pop into your head during Kevin's keynote, save them. We have some microphones around the auditorium, and we'll be taking questions at the end of the talk. So enjoy the keynote, it's going to be fantastic, and we'll see you back in a few minutes for Q&A.

Thank you.

ANNOUNCER: Ladies and gentlemen, please welcome General Manager Microsoft Research Kevin Schofield.

KEVIN SCHOFIELD: Good morning. All right. First, most important, give it up for the band, are they awesome or what? (Cheers and applause.) Thanks, guys. It makes you feel like David Letterman up here, right? We've got a studio audience of 5,000 of my closest friends, we've got an awesome band, we've got a great morning here. I'm going to start with a half-hour lecture on a whiteboard over here ... no, we're not going to get into that.

I'm going to talk about exploration, which is an important, time-honored, oftentimes legendary human endeavor. I want to, with your permission, give a little bit of historical perspective on exploration and the role of technology in changing the way we explore our world, our universe if you will, over time; from there, talk about the role technology plays in the way we live and work today; and share with you some Microsoft Research technologies that we've been cooking up in our labs over the last couple of years. And we're going to have a lot of fun along the way.

Just a couple of prelaunch activities here. I figure we should probably spend a couple of minutes telling you a little bit about Microsoft Research, the organization that I work in. We're the basic research arm of Microsoft. So within R&D, we're the R part. It's actually a pretty small R compared to the D. Overall, Microsoft has about 30,000 or so people working on product development, building the great products that you all enjoy. Our research lab component is about 850 people or so. You can think of us as the world's largest computer science department, and in fact that's pretty close to the truth: if you put the computer science departments of MIT, Stanford, and UC-Berkeley together, we're bigger than all three of them combined. So we're a pretty big organization.

We have six labs around the world. About half of our folks are in the United States. Our biggest lab is in Redmond, Washington, which should be no surprise to anybody. We’ve got folks down in Silicon Valley, we’ve also got folks in Cambridge, Massachusetts. The other half of our folks are in three labs in other parts of the world, Cambridge, England; Beijing, China; Bangalore, India. On top of that, we have dozens of collaborative institutes and labs that we’ve set up with academia around the world, and on top of that we have literally hundreds of individual collaborations between our researchers and academic researchers and universities spanning the globe.

We think this is very important. We want to be a very open, collaborative research organization because we don’t think we can solve all the problems of computer science ourselves, and a rising tide really raises all ships. We want the field as a whole to advance.

Our mission in Microsoft Research is three-fold. No. 1, across the whole breadth of the field, about 55 or so different areas that we work in, advance the state of the art: from low-level things like operating systems, networking, and databases, to high-level things such as defining new types of user experiences, some of which I'm going to share with you today from the work we're doing around Office. So, to have a steady drumbeat of progress across the whole set of them.

The second part is to get those advances into Microsoft products as fast as we possibly can. And we've been pretty successful at that. At this point, pretty much all Microsoft products have been touched by Microsoft Research technology. In fact, almost all of them are built using development tools that we created in Microsoft Research to help improve the overall quality of our products and improve performance. On top of that, there are individually hundreds of technologies that have gone from Microsoft Research labs into Microsoft products. We mostly work at the technology level. We don't have a SQL Server research group or a Dynamics research group; we have operating systems, and networking, and databases, and lots of things in between. One of the advantages of that is we can take a technology and feed it into multiple different products and places where it might help.

And really the third part of our mission is to make sure that Microsoft has a future. In one sense, that's about making sure that the product lines you're buying into, SQL Server or Dynamics, SharePoint, Office, will continue to have a long life, that they never become technological dead-ends. But the other part is understanding that if you asked Ray Ozzie, or Craig Mundie, or Steve Ballmer what the most important technologies are going to be three years from now, they couldn't tell you. They've got some ideas, you've got some ideas, I've got some ideas, and we're going to be right about some of those things and wrong about some of them. But by having a research organization that's looking farther out, exploring where the state of the art is moving, when we turn that corner three years from now and go, oh my goodness, suddenly this technology is really important to our customers and partners, at a minimum we'll have expertise in house, and hopefully, if we do our job right, we'll have some technology that we can bring to market very quickly and get into our customers' and partners' hands. Time after time in the 17 years we've had Microsoft Research, since it was founded in 1991, this has proven to be a good bet for us.

So coming back to this notion of exploration. By the way, this is Walt Schirra; he was the commander of Apollo VII. As you can probably tell by now, four slides in, I'm kind of a NASA junkie, and you're going to see a bunch of interesting, just stunning NASA photography here. I'm also an amateur photographer, which is one of the reasons I like the photography so much. But as a whole, I'm a NASA junkie because I think in a lot of ways they really embody the spirit of modern exploration. In fact, even look at the names of the space shuttles: Discovery, Endeavour, Challenger, even Enterprise. These are folks who are thinking about how you can explore, how we can push boundaries back, how we push beyond what humans are capable of doing right now. I think if you asked some of history's most famous explorers, Columbus, Magellan, Lewis and Clark and Sacagawea, Charles Darwin, Jacques Cousteau, the Apollo crews, you'd hear some very common themes about why they wanted to explore: from simple things like a straightforward thirst for knowledge, to a desire to push back boundaries, including the boundaries of human knowledge and human understanding. I'm sure there's a bit of restlessness in there as well; they need to get moving, they have a drive. But also a desire to create something that's greater than themselves, right, something that's sustainable, that's lasting, that's a legacy to humanity.

So I think about that, and think about how that has also affected the way we explore, how we as the human race approach exploration. If we pick apart the act of exploring, the first part is, of course, getting there: how do we reach those new frontiers? And, in fact, technology has always played a role in making this happen. If you look back to the Golden Age of seafaring exploration, there were already thousands of years of human history and accumulated knowledge about how to build better ships, stronger ships, ones that would hold up in large storms on the ocean.

Look at something like the Apollo Lunar Lander, which was a marvel of technology in the 1960s; today my watch has more computing power than the Apollo lander had. That's how far and how fast technology has advanced. In the bottom left corner of this slide is the current prototype of the next generation of lunar exploration vehicle, which they're preparing and testing out in the desert right now for future lunar exploration missions. I put the Boeing 787 there as well for its technology and materials; the 787 is built with carbon fiber for its fuselage now, so a lot of change there.

The tools that we use to do our exploration have also changed, from telescopes and binoculars and simple navigation devices like a compass, to something a little more complicated like a sextant. And the sextant has an interesting story in and of itself. The sextant is a tool for measuring the angle of a star above the horizon. If you use it on something like the North Star, it can tell you your latitude when you're out in the middle of the ocean, a good thing to know, and super, super important.

And ocean navigators back in, say, Columbus' time used that because they knew that the trade winds and ocean currents at particular latitudes tended to go straight east or straight west. So if they wanted to go to Hawaii from California, for example, they'd head down the South American coast until they got to the latitude of Hawaii, and then head straight west from there until they hit Hawaii. And they had to do it that way, because the one thing a sextant doesn't tell you is your longitude. It can tell you latitude, where you are north/south; but back in the time of Columbus, once you got into the middle of the ocean, you had no idea where you were east/west.

In fact, the best thing they could do back then to try to figure this out was to take a clock with them, still set to home time, and then see how much earlier or later noon came, when the sun was directly overhead, to get an idea of how far they'd come. But in Columbus' time, clocks were horribly inaccurate. By one estimate, they could be off by as much as 175 miles a day if they were trying to judge their longitude using a clock of that era. And, in fact, some people suggest this is one of the reasons why Columbus thought he was in India, as opposed to America, when he made his famous trip west.
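The arithmetic behind that clock trick is worth spelling out. The Earth turns 360 degrees in 24 hours, so every hour of difference between local noon and home-port noon corresponds to 15 degrees of longitude. A minimal sketch, using a rough figure of 69 miles per degree near the equator (my illustrative numbers, not the speaker's):

```python
# Dead-reckoning longitude from a clock, as early navigators tried to do.
# Assumes the clock is still set to home-port time; figures are illustrative.

DEGREES_PER_HOUR = 360 / 24   # Earth rotates 15 degrees of longitude per hour
MILES_PER_DEGREE = 69         # rough east-west miles per degree at the equator

def longitude_offset(noon_delay_hours):
    """Degrees of longitude west of home port, given how many hours
    local noon lags behind noon on the home-port clock."""
    return noon_delay_hours * DEGREES_PER_HOUR

# A clock that has drifted just 10 minutes misplaces the ship badly:
drift_hours = 10 / 60
error_miles = longitude_offset(drift_hours) * MILES_PER_DEGREE
print(error_miles)  # in the same ballpark as the 175-miles-a-day estimate
```

That sensitivity to clock error is exactly why accurate marine chronometers were such a breakthrough.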

So within that context, you look at something like GPS, the Global Positioning System, and it's a huge transformational technology, because suddenly we can know our exact, precise location on the planet at any given time. A super, super-transformative piece of technology. Now, let's keep picking apart exploration. What do explorers do once they get wherever they want to be? Well, they collect lots of data, they take samples, they document everything that they find there. And they bring it all back.

In fact, one of the best examples of this is Charles Darwin. In the 1830s he sailed on HMS Beagle and circumnavigated the globe, stopping at all sorts of different places and taking botanical and geological samples. He had a pile of notebooks, and he filled them with notes about everything he found in these different places. You hear a lot about what he did at the Galapagos Islands, but this was literally a trip all around the world. He came back to the U.K. with a huge store of information: notebooks, and zoological and botanical catalogues.

In fact, all those notebooks and catalogues still exist in their original form, spread across a couple of different institutions in the U.K. His zoological catalogues are at the University of Cambridge. And because we happen to have a lab right down the street from the University of Cambridge, we've actually been working with them on digitizing Darwin's original zoological catalogues from his trip on the Beagle. Here are some snapshots of a few of those pages. In many cases, in the botanical catalogues, he actually took real plant samples and pressed them onto the pages, and you can see his handwritten notes around them.

I'm guessing the barcodes weren't Darwin's originals; I'm guessing they came later. But it's just fascinating to look at these. I'll add one more interesting tidbit, which is that a number of the plant samples he brought back from his trip are actually still alive, and still being cultivated by the botanical gardens at the University of Cambridge. If you ever get there, I'd encourage you to go check it out.

Now, how we go about collecting all this data when we explore today has changed a lot. In fact, one of the biggest changes has been the broad adoption of cameras, in particular digital cameras. It's interesting to note that cameras actually existed in the time of Charles Darwin, and of Lewis and Clark and Sacagawea on their famous trip as well, but they were really unwieldy, right? They were big and boxy, they broke easily, and the film was incredibly difficult to work with. Imagine what it would have been like for Darwin to take a huge set of film equipment with him to document things when he went on his trip in the 1830s.

But you can also imagine how amazing it would have been if they had a modern digital camera they could have taken with them; we could have seen through Lewis and Clark's eyes what their trip west really looked like. It's amazing to think about. But now these sorts of things are very routine, and they're transformational, again, in how we can capture live data about what the world really looks like, as well as what space looks like. We send cameras out into space, as well.

By using computers and computing technology, and in particular computer vision technology, we can take all this great photographic data and do an enormous amount of processing. We can clean it up, and we can stitch lots of pictures together to get a larger image of what's going on. In fact, I want to show you a couple of demos of technologies built on computer vision research that we've been doing in Microsoft Research.

The first demo I want to show you here is an image many of you may recognize: the famous Half Dome in Yosemite, seen from Glacier Point. This is a 17-gigapixel image. It's literally thousands of individual images. We mount a camera in a mechanized rig that can adjust the direction and the zoom of the camera, and take literally thousands of individual pictures. Then we feed them into a computer system; there's no manual stitching involved. It takes them all, finds the overlaps, and builds the larger image.
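The core trick, finding where neighboring shots overlap and merging them so the shared region is counted once, can be shown with a toy one-dimensional version. This is my simplification, not the Microsoft Research pipeline: a real stitcher matches image features and blends exposure rather than requiring exact pixel matches.

```python
def find_overlap(left, right, min_overlap=1):
    """Width of the longest suffix of `left` that exactly matches a
    prefix of `right` (a toy stand-in for feature matching)."""
    best = 0
    for w in range(min_overlap, min(len(left), len(right)) + 1):
        if left[-w:] == right[:w]:
            best = w
    return best

def stitch(left, right):
    """Join two overlapping strips, keeping the shared region once."""
    return left + right[find_overlap(left, right):]

# Two 'photos' of the same pixel row, shot with overlapping coverage:
row_a = [3, 5, 7, 9, 2, 4]
row_b = [2, 4, 6, 8]
print(stitch(row_a, row_b))  # the overlapping [2, 4] appears only once
```

Scaling the same idea to two dimensions and thousands of frames is what turns a rig full of snapshots into one seamless gigapixel image.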

The cool thing about this is that in Microsoft Research we created not only the rig to take those pictures, but also a new image file format to capture the result, so we can look at it from far away, but we can also zoom in. Do you see those folks at the top of Half Dome? Let's go take a look.

Now, it actually hasn't loaded the whole picture yet. There's some of them right there. In fact, if we go down here, there's one guy hanging out right down here. It looks like he has a hockey stick, but I think that's actually a little hole or something. And there's some guy even farther down. One of the cool things about this file format is that we don't have to load the entire picture, literally gigabytes, into memory to view the thing. We can zoom and load in just the parts that we want, even over a network, and that's super, super fast. In fact, we're doing it over a network now; this is off an Internet site. I hope these guys are watching their next step, because it really is quite a doozy. 17 gigapixels, all right.
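The reason zooming stays fast is that formats like this store the image as a pyramid of fixed-size tiles, and the viewer fetches only the tiles under the current viewport. Here is a rough sketch of that bookkeeping; the 256-pixel tile size is my assumption, not necessarily what the actual format uses.

```python
import math

def tiles_in_view(x, y, width, height, tile=256):
    """Tile coordinates (at one pyramid level) that intersect a viewport
    whose top-left corner is (x, y) in level pixels."""
    x0, y0 = x // tile, y // tile
    x1, y1 = (x + width - 1) // tile, (y + height - 1) // tile
    return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

def pyramid_levels(width, height, tile=256):
    """Levels in a power-of-two pyramid, from a one-tile overview
    down to the full-resolution layer."""
    return math.ceil(math.log2(max(width, height) / tile)) + 1

# Even panning around a gigapixel image only touches a few dozen tiles:
print(len(tiles_in_view(10_000, 20_000, 1920, 1080)))
```

Because each zoom level halves the resolution of the one below it, the viewer can always find a level where the visible region is just a handful of tiles, which is why a 17-gigapixel image browses as smoothly as a small one.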

Now imagine if we were to actually do this, say, once every few months, or even once a month, go do this, and we did that for, say, 10 years, 25 years, and we could see over time what’s happening with the forest, what’s happening with erosion, what’s happening with the climate and overall changes in Yosemite, right. Imagine what an amazing resource that would be for us to understand what’s happening with this pristine ecological resource that we have in our country. So there’s one example of something where we can take vision technology and really capture some amazing data and information, imagery about what’s going on in an important ecological resource.

The second one I want to show you is an example of how we can take research technology and apply it not only to capturing data out in the field, but also in our office context. We do a lot of cleanup of data, matching brightness and contrast, when building something like the Yosemite image. But we can use a lot of that same technology for capturing something like a white board.

I don't know how many of you have had this experience; I've certainly been in many meetings where, at the end, there's lots of great detailed stuff up on the white board, and somebody has to spend the next half an hour copying it all down, or somebody grabs a camera phone and tries to take pictures of it, but then you can never really read what's on the camera phone.

So we've been working on how we can take something like a Web cam. This is a pretty generic USB Web cam; it just plugs into the computer over here, and it's pointed at the white board over here. You can see about the picture you would expect up on the screen. But we can actually clean it up significantly, separating out the background from the foreground writing, and we get a much cleaner picture.
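The cleanup step can be approximated very simply: estimate the bright whiteboard background around each pixel, then keep only pixels clearly darker than that background as pen strokes. This toy version is my simplification, not the actual research algorithm; it takes a grayscale image as a 2-D list of 0-255 intensities.

```python
def clean_whiteboard(gray, window=15, ink_ratio=0.8):
    """Keep pixels noticeably darker than the local background as ink (0);
    everything else becomes clean white (255)."""
    h, w = len(gray), len(gray[0])
    r = window // 2
    out = [[255] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Background estimate: the brightest pixel in the neighborhood,
            # since the board itself is brighter than any pen stroke.
            bg = max(
                gray[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1))
            )
            if gray[y][x] < bg * ink_ratio:
                out[y][x] = 0  # classified as a pen stroke
    return out

board = [[200] * 5 for _ in range(5)]
board[2][2] = 50                      # one dark pen stroke
cleaned = clean_whiteboard(board)
print(cleaned[2][2], cleaned[0][0])   # stroke kept, background whitened
```

Because the background is estimated locally, the same idea also tolerates uneven lighting across the board, which is what makes the camera image look washed out in the first place.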

In fact, not only do we have a great static picture of this; this is something you could imagine feeding into, say, Live Meeting, so that somebody you're meeting with remotely can see what you're doing on the white board. So I can erase something and you'll see it drop off there. You don't see me in the picture; it separates out the background, and it knows that if someone moves around in front, that has nothing to do with the board. I can change it to that, and you'll see it up here in a second.

So we can get a pretty deep, rich understanding of the structure of something like a white board and the data we put up there, so that we can capture it, really clean it up well, and send it to other places. Because we understand the structure, if I were doing this over Live Meeting, I wouldn't have to send every single whole picture frame across. I could just send the changes, and I could send the structure. So it's a rich kind of application, but it really shows how far we've come with computer vision to be able to do things like this.
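Sending just the changes is classic delta encoding. A minimal sketch, treating each frame as a 2-D grid of values; the real system would work with recognized strokes rather than raw cells, but the bandwidth argument is the same.

```python
def diff_frames(prev, curr):
    """List only the cells that changed between two frames."""
    return [
        (y, x, curr[y][x])
        for y in range(len(curr))
        for x in range(len(curr[0]))
        if curr[y][x] != prev[y][x]
    ]

def apply_diff(frame, changes):
    """Rebuild the next frame on the receiving side from the delta."""
    for y, x, value in changes:
        frame[y][x] = value
    return frame

prev = [[0, 0, 0], [0, 0, 0]]
curr = [[0, 7, 0], [0, 0, 9]]
delta = diff_frames(prev, curr)
print(delta)  # two cells changed, so only two records cross the wire
```

A receiver that starts from the same previous frame and applies the delta ends up with an exact copy of the new frame, at a fraction of the cost of resending the whole image.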

OK. So we can capture data, we can process it, we can make it nice and rich. But it's not just about taking these cameras out there and taking lots of pictures ourselves; it's equally important, if not more important, that we can send cameras and data collection into places where we can't go ourselves. This is also a fairly time-honored tradition, starting with things like weather balloons, which we send off to the top of the atmosphere to get readings that help us do weather forecasting.

Voyager: it was launched in 1977, it's been out there for 32 years, and it's right now leaving our solar system. Voyager 1 and Voyager 2 are both going strong, still sending back lots of data from their trips farther out, and we're learning about what happens in the heliosphere, right at the edge of our solar system.

We've got Hubble and Chandra up in orbit taking lots of amazing photographic and infrared data. We've got underwater robotic submarines that can explore the depths of the ocean. In fact, another amazing example is the Mars exploration rovers. They've been out there on Mars going for five years now. It was originally supposed to be a 90-day mission, and they're still going after five years. OK, Spirit has one wheel that doesn't turn anymore, so they kind of drive it around backwards. But you know what? Five years, through five Martian winters, and wind storms and sand storms, and the thing is still going. Both of them, in fact, are still going. They've sent back over 250,000 pictures. They've sent back soil sample data. They've discovered that there was once liquid water on the face of Mars. It's just amazing that we can send these things off to another planet and get that kind of data back.

But it's not just about where we can't go; sometimes it's about where we can't stay. So Microsoft Research has been working with a set of oceanographers to look at putting a sensor array at the bottom of the ocean right off the coast of Washington, Oregon, and Northern California, in a project we call Trident, out on what they call the Juan de Fuca Plate, so we can understand what's happening with plate tectonics and how that affects temperature, salinity, and ocean currents down at the bottom of the ocean. We're learning a lot about oceanography, and about plate tectonics, from doing this.

One of the things we're doing as part of this is looking at the desktop tools that scientists need to take all this data and process it, analyze it, and visualize it. It's sort of, if you will: what does Office for scientists look like? What is that essential set of desktop tools that a scientist needs? What we learn from that process helps us come back to groups like Dynamics, and Office, and SharePoint, and Excel, and think about the next generation of data processing tools that we need to give to a broader set of people to help them deal with data, because you're all collecting huge amounts of data as well.

Another good example is something we're doing called the Swiss Experiment. This is in collaboration with the two big research universities in Switzerland, EPFL Lausanne and ETH Zurich, as well as a host of other organizations taking part. We've gone up into the Alps, right near Davos, taken some of our Microsoft Research wireless sensor network technology, and planted a bunch of sensors out there in one particular area of the Alps to help us collect temperature and humidity information, so that we can start to map what's happening with the climate up in the Alps. It turns out to be a very fragile ecology there as well, but we don't have a lot of data, particularly longitudinal data, about what's happening with the climate up there. So we're working with these organizations to get a much better, deeper understanding of this fragile and very rapidly changing ecology up in the Alps.

So what do we get out of this? We get a lot of data. We get data that we can bring back, build models out of this, and then from the comfort and safety of our own desktops we can be armchair explorers, if you will. We can actually sit down and explore these data sets, both from the point of view of science, as well as work, living, capturing cultural history, if you will, anything that sort of captures the essence and important salient elements of our world, and our universe. We can use the strength of sort of the desktop, and desktop tools, and visualization to really help us understand what’s going on out there.

I want to show you a few demos to give you an idea of what our capabilities are around that. The first one I want to share with you, and this is a little bit more in a work context, is something we call Mavis. It has to do with speech recognition. One of the things we've seen with speech recognition over time is that we continue to make significant progress. We'd love it to get to the point where you can have a conversation and it will transcribe the entire thing. It's not quite good enough for that yet. One of the big advances we've made in the last few years, though, is much better noise modeling: getting all the background noise out, and really isolating the voices themselves better.

So it's not quite good enough for transcription, but it is good enough for indexing. If you want to take audio recordings and video recordings and index them so that you can search them the way you search for anything else in a search engine, we can actually do that now.
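Once the recognizer emits words with timestamps, the indexing side is an ordinary inverted index whose postings carry time offsets, so a search hit can jump straight to the right moment in the audio. A minimal sketch; the recording IDs and word timings here are made up for illustration.

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each recognized word to (recording, seconds) positions.
    `transcripts`: recording id -> list of (seconds, word) pairs."""
    index = defaultdict(list)
    for rec_id, words in transcripts.items():
        for seconds, word in words:
            index[word.lower()].append((rec_id, seconds))
    return index

# Hypothetical recognizer output for two talks:
talks = {
    "ted_042": [(12.0, "climate"), (13.1, "change"), (95.4, "climate")],
    "ted_107": [(7.2, "crows"), (8.0, "freak"), (8.3, "out")],
}
index = build_index(talks)
print(index["climate"])  # each hit is a seek point into the recording
```

Because each posting stores a timestamp rather than just a document ID, the search interface can seek the player directly to the moment the word was spoken, which is exactly the behavior shown in the demo.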

So our Beijing lab has been building prototypes of this. We took a lot of the lectures from the TED series of conferences, indexed them all, and put them in an interface you can look through. So, for example, since I've been talking about climate, I'll put in "climate" here. We get back a long list of results, and you can see all the different places from our index where the word climate is highlighted for us. At any given point I can click on any of these, and it will take me not just to that video, but right to the point in that video where they're actually talking about it, if we can get the audio up on this. Let me just jump to another one here. Can we get the audio up on the machine, please? One of the nice things you can see from just reading along at the bottom is that it's actually doing a pretty good job on transcription, even though that's not exactly what we're trying to do with this. A couple more examples here. One of my favorites: put in "freak out," and it turns out somebody actually said that. I'll just pause this for a second. I encourage you all to go to the TED site, where they have all these lectures available to download for free, and see this one from Joshua Klein on the amazing intelligence of crows. It will blow you away.

So, one more example up here, just for fun: Bill Gates. He actually did get mentioned a couple of times. That one it didn't quite get right; we've still got a little bit of work to do here. There's one other one. That one it got right. So like I said, it's not perfect, but we've sure made a lot of progress with it.

One of the things we’ve been doing to learn more about this is getting together with an outside partner on a pilot project. That partner is close to home for us: it’s the State of Washington, and they have audio recordings of legislative sessions, committee hearings, and talks that various government officials have given over a long period of time.

They took this basic technology and went all the way back to the ’80s – they were in the middle of a large digitization project anyway, but they really needed a good way to index the results. So they worked with us on indexing a large amount of their content. For example, I can put in something like gasoline prices, and it comes back and shows me a whole bunch of audio records it found of committee hearings, and I get the exact same kind of interface, where I can jump to specific points in the audio.

One of the interesting things I found in looking through this is seeing, for these different kinds of sources, which committees were actually interested in a topic. So for gas prices you see the ones you’d expect, like energy and utilities, but also commerce and labor, and institutions.

Another good example, one close to home for us in Washington State, is salmon population. For this one, again looking at which groups were involved, you see agriculture and ecology, which you’d expect. But we also see the labor and economic development committee – it’s an economic issue as well. So you get these really interesting new insights into who cares about what topics when you can do this kind of analysis. So that’s one example of this kind of thing.

The second example I want to show you here is a project we’ve been working on with the Government of India. And the Government of India has a very difficult problem, which is that they have a very large number of culturally significant heritage sites – temples and a wide variety of other cultural sites – and a lot of them are falling into disrepair. They face a hard choice, because they want to keep these sites in repair and preserve them, but they have a lot of hungry people in their country, and they’re faced every day with the choice of feeding hungry people or repairing some of these amazing cultural heritage sites. A hard choice. I wouldn’t want to be in their shoes.

One of the projects we’ve been looking at is to see if we can digitally capture some of these important heritage sites. One that we’ve used as a pilot project is the Sri Andal Temple in India. There’s a little screen at the beginning to give a bit of an introduction, and then there are a whole bunch of different sections to this – I can look through here and you can see them all. This is the main what they call gopuram – the main gateway of the temple. We can stop here, and again we have an HD version of this where we can do a panorama and look around the town itself. We can even zoom in and start to see a lot of the detail of this gopuram.

For example, this is the sort of open gateway garden. We’ve tried to put a narrative over this so people can understand it all, but at any point I can stop the narrative and explore on my own. This is actually a Photosynth, where we’ve taken a large number of photos from different angles, and I can zoom in on them. This is Lord Vishnu – a very important part of the story of the Sri Andal Temple: Lord Vishnu discovering baby Andal underneath a lotus plant. A critical part of the story of this place.

In fact, what I can do here is explore downward. That’s right above the main entrance to this part, and I can zoom in from here to different pictures, and if I move forward, this is actually that lotus plant. It’s still alive, and they keep it within the temple. So it’s amazing that even though I’ll never visit this particular heritage site in India, I have this great opportunity to put together the pieces of the story of this temple and see these different parts.

Another great part of this, one of my favorite parts, is the inner courtyard. And this is another part, the marketplace itself, where they have lots of different stores, and you can see a sort of 3D view of it; at any point I can stop and explore on my own. But it’s going to take me to one interesting spot, a very well-known sweet shop in here. One of the things we’ve tried to do is annotate on top of the Photosynth, so you can explore around but also see the annotations. There are various places where I can click and bring up deeper information on particular aspects of it.

I’m going to give the courtyard one more shot here – and it’s not working, so I think I’m just going to stop it. Anyway, you get the feel for this: it’s very rich, it’s very detailed, and there’s a lot of incredible content in here that we’ve captured and can explore very deeply at our own leisure. We can share it with other people, too – this doesn’t have to be a one-person thing. It’s something you can share with a much larger set of people as well.

So I’ve shown a couple of examples there. The third example I want to show is actually one I showed an earlier version of last year when I was here, and that’s WorldWide Telescope. WorldWide Telescope is something we make available for free to educators and researchers. It started as an astronomy project that we did with academic astronomers, who had a huge amount of data but no good tools to really explore it, put it together, and overlay different kinds of spatial data on top of other kinds of spatial data. And it was a great opportunity for us to learn how to help SQL Server work better with spatial data, and to learn about all sorts of different things.

We’re very fortunate this morning to have Curtis Wong, one of the creators of WorldWide Telescope, here with us. He’s going to come out right now and share how we’ve been continuing to improve WorldWide Telescope, and the new directions we’ve taken it in over the year since I last showed it.

Curtis, thanks for coming.

CURTIS WONG: Thanks, Kevin. (Applause.)

So, what’s happening with WorldWide Telescope? It’s been a really exciting eight months since we launched. WorldWide Telescope has totally gone global, and millions of kids of all ages have started to explore the universe. They’ve downloaded millions of tours so they can hear directly from astronomers at Harvard and the University of Chicago and other places about what’s happening in the universe, in the context of the sky.

We’ve had some amazing press on this thing, and it’s really, really humbling.

We’ve also started to win some design awards. We were a finalist for the Edison Innovation Award as the best new product of the year, as well as being selected by the AIGA, the American Institute of Graphic Arts, as a finalist for the outstanding design of the year.

So there are a number of interesting updates to WorldWide Telescope over the past year. I mean, the first one is really about data. I think when you think about WorldWide Telescope, what you’re looking at here in the sky – do we have the sky up? I think we do. Okay, great. So you’re looking at the sky here.

So earlier, Kevin was talking about the Yosemite picture, which was 17 gigapixels. What you’re looking at here, this image of the Digital Sky Survey, is actually about 1,000 gigapixels, okay? And something that big allows you to do something like look at the Big Dipper here, but also select an object in the field of view and zoom right into the center of the galaxy right here.

So we can just continue to zoom into this particular object and look at any part of it. A menu here shows you views from other telescopes, such as the infrared telescope, and so on. We have much, much more data: where we had about a dozen all-sky surveys, we now have more than 50, encompassing the entire electromagnetic spectrum.
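Serving an image that large interactively means never loading it whole: viewers of this kind typically fetch small tiles from a multi-resolution pyramid, where each level up halves the image in both dimensions. A rough sketch of the tile bookkeeping, under assumed 256-pixel tiles (not WorldWide Telescope's actual scheme):

```python
import math

TILE = 256  # assumed tile size in pixels

def tiles_for_view(img_w, img_h, zoom, vx, vy, vw, vh):
    """Which tiles of a tiled image pyramid a viewport needs.
    zoom: 0 = full resolution; each level up halves both dimensions.
    (vx, vy, vw, vh): viewport rectangle in level-`zoom` pixel coords.
    Returns a list of (zoom, tile_x, tile_y) addresses to fetch."""
    scale = 2 ** zoom
    level_w = math.ceil(img_w / scale)
    level_h = math.ceil(img_h / scale)
    x0, y0 = max(0, vx) // TILE, max(0, vy) // TILE
    x1 = min(level_w - 1, vx + vw - 1) // TILE
    y1 = min(level_h - 1, vy + vh - 1) // TILE
    return [(zoom, tx, ty) for ty in range(y0, y1 + 1)
                           for tx in range(x0, x1 + 1)]
```

However big the full survey is, a screen-sized viewport only ever touches a handful of tiles, which is what makes smooth zooming over a network feasible.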

We have a number of new images from the Hubble and other space telescopes. We also talked about the Mars landers: we have panoramas from Pathfinder, Spirit, and Opportunity, so you can look at all of those. Some of them are in 3D, and you can zoom out to objects in the distance. It’s pretty amazing. And all of those are available here in WorldWide Telescope.

So we’ve also enabled much simpler connections to data than before. If you look, I can bring up this thing called the finder scope. And the finder scope allows us to connect to lots of other information sources over the Web. So say I’m a kid doing my homework and I want to learn about this particular galaxy, I can just say, hey, look that up in Wikipedia and tell me what that thing is. And then I can learn about that particular object.

Or maybe I’m an advanced student and I really want to learn more about that from the Smithsonian Astrophysical Database. Well, typically, that’s a hard place to get to, you have to know where it is, you have to know how to structure a query, but we just go get it for you and show you here are the latest technical papers related to that particular object. So this is a seamless connection to lots of information all over the Web.

If I wanted to see an original image of this, I can go and find original Sloan images of this particular galaxy. Here’s an example of that.

All of these things – in a consumer application, we’re thinking about how you can make these kinds of complex searches really, really simple, so that they can eventually migrate down into our products.

In looking at something like this particular galaxy, we have the ability to combine lots of different data layers. So as an example, right here we’re looking at the galaxy in visible light, and I’ll set that as the background layer. And I’ll set this image from the Chandra X-ray telescope as the foreground layer.

And what that allows me to do, as I look into this object, is compare two different data sets. In the X-ray, what you typically see are things like black holes and supernovas – very high-energy sources. So you can start to see where those things are within this particular galaxy. And you can do this with just about any data set, whether it’s an all-sky data set or something else.
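The foreground/background layering described here boils down to a per-pixel cross-fade between the two data sets. A toy sketch, using tiny nested lists as stand-ins for real imagery:

```python
def blend(background, foreground, opacity):
    """Cross-fade two equally sized grayscale 'images' (lists of rows).
    opacity 0.0 shows only the background layer, 1.0 only the foreground."""
    return [[(1 - opacity) * b + opacity * f
             for b, f in zip(brow, frow)]
            for brow, frow in zip(background, foreground)]

visible = [[100, 200], [50, 0]]   # stand-in for a visible-light layer
xray    = [[0, 100], [150, 200]]  # stand-in for a Chandra X-ray layer
half    = blend(visible, xray, 0.5)
```

Sliding the opacity back and forth is what lets the eye register which bright X-ray sources coincide with which visible-light features.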

So the second major new feature of WorldWide Telescope that’s particularly exciting is this whole idea of simulation. I’m going to pull up a simulation of the solar system. What you’re looking at here is a view of the solar system from the top down, and I can pull up Saturn, so let’s go take a trip out to Saturn. This is an astrometrically correct view of the solar system. In other words, it shows the location of each planet – where it is right now and its orientation relative to the Earth – in the correct place given the current time.

With Saturn, the orbit is about 29 years, and about every 14 and a half years the rings are edge-on to us, so they end up looking really thin and almost disappear. I mean, the rings are actually only about the thickness of your house – that’s how thick they are. They’re made up of little chunks of ice from about a half inch to about the size of a house; that’s what makes up the rings.

So let’s go take a look at the Earth. When we get to the Earth, we’re coming in from the back side, because we’re coming in from Saturn. And one of the interesting things about the Earth here – here we are on the back side of the Earth. We’ll come around, and you can see it’s morning, 10 o’clock here in Louisiana. And we can go down and take a look.

KEVIN SCHOFIELD: So this is now?

CURTIS WONG: This is now. Yeah, exactly. And you can go all the way up. Now I’m going to pull up a little tour that I’ve constructed, and this will show you the most significant solar eclipse in our lifetime. A lot of times these eclipses are in places like Siberia or Iran, or out in the middle of the ocean, and they’re in the winter, so the odds of seeing them are pretty low. But in August of 2017, you’re going to see an eclipse that is the most significant eclipse of our lifetime.

So we have a view of the Earth here, and this is actually a real view: there’s the shadow of the moon here, crossing from Oregon over Wyoming, and it’s going to pass all the way down to South Carolina.

KEVIN SCHOFIELD: So this isn’t a video?

CURTIS WONG: It’s not a video.

KEVIN SCHOFIELD: You’re actually really running WorldWide Telescope.

CURTIS WONG: We’re running WorldWide Telescope.

KEVIN SCHOFIELD: Simulating what’s going to happen.

CURTIS WONG: Exactly. And it’s creating that shadow because the moon is here and the sun is out there. So we’ve switched to Lexington, South Carolina, and we’re looking up at the sky. Of course, we’ve accelerated time a little bit here, but this is what the eclipse will look like from Lexington at 11:43 a.m., OK?

So we’re going to flip back out to the view from space – there’s the shadow of the moon. We’re going to pull back away from the Earth to show you that alignment of the moon and the Earth. So 250,000 miles, there goes the moon. Now we’re going to go even faster, faster than the speed of light. We’ll see the sun – 93 million miles – and there goes the sun, OK? We’re going to go hundreds of times faster than the speed of light, just like in Star Wars.
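The distances quoted here make the light travel times easy to check with back-of-the-envelope arithmetic: light from the moon reaches us in a bit over a second, and sunlight takes a little over eight minutes.

```python
SPEED_OF_LIGHT_MI_S = 186_282  # miles per second, approximately

def light_travel_seconds(miles):
    """Seconds for light to cover a given distance in miles."""
    return miles / SPEED_OF_LIGHT_MI_S

moon = light_travel_seconds(250_000)     # roughly 1.3 seconds
sun  = light_travel_seconds(93_000_000)  # roughly 500 seconds, about 8.3 minutes
```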

And we’re going to go out to the edge of the Milky Way in a matter of seconds. Now, notice the constellations are all changing, okay?

KEVIN SCHOFIELD: Is that the death star? (Laughter.)

CURTIS WONG: So we’re going out. So now we’re in the realm of the galaxies – those are galaxies. We’re hundreds of millions of light years away from the Earth, and we’re seeing the large-scale structure of the universe: clustering of galaxies, and voids where there aren’t any galaxies at all. In fact, I can pause the tour right here and show you that this isn’t just a canned animation – I can actually explore this 3D model of the universe interactively, and anybody, any kid out there, can do this.

So let’s pick the tour back up. Where did the data for this model of the universe come from? It started from this image, this mosaic of the Sloan Digital Sky Survey, which you’re seeing right now.

So I’m going to pause the tour here and actually put this away. This is the Sloan Digital Sky Survey, which is a sort of football-shaped mosaic. It starts just above the Milky Way and ends down below the constellation of Leo. And within the Sloan, what you’ll be able to see – I’m going to bring up the third new thing, which is the ability to do a little bit of plotting and visualization of data.

Well, I’m going to open up a table of 700,000 galaxies in the Sloan. This takes a little while, as you can imagine, with that many rows in a table. Okay, here’s our table. What I can do is select any row in that table, and it brings up a plot of where that object is. I’m going to bring up the Sloan behind it so you can see that. OK, here’s another one here.

And what I can do is plot all 700,000 of these galaxies on the sky. Now, it doesn’t look like we’re seeing too many right here, but there are actually quite a number of them, and if I pull out more, you can start to see the actual footprint of the Sloan galaxies, which actually extends much higher here.
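Plotting a 700,000-row catalog usually means first culling it to the current field of view, so only the galaxies actually on screen get drawn. A simplified sketch, assuming a made-up (name, RA, Dec) table and a rectangular viewport in degrees:

```python
def in_view(rows, ra_min, ra_max, dec_min, dec_max):
    """Filter catalog rows (name, ra_degrees, dec_degrees) to a viewport,
    so only objects inside the current field of view get plotted."""
    return [name for name, ra, dec in rows
            if ra_min <= ra <= ra_max and dec_min <= dec <= dec_max]

catalog = [  # tiny stand-in for a 700,000-row galaxy table
    ("gal_a", 150.0, 20.0),
    ("gal_b", 151.2, 21.5),
    ("gal_c", 310.0, -5.0),
]
visible = in_view(catalog, 149.0, 152.0, 19.0, 22.0)
```

A real viewer would also bin or index the catalog spatially rather than scan every row per frame, but the culling idea is the same.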

OK, so let me pull that back up. All right, I’m going to unplot this. This brings me to some interesting trends that are happening in technology, and in astronomy in particular: there’s starting to be a real explosion of data. There are telescopes coming online in the next few years, such as the Large Synoptic Survey Telescope, that are going to be collecting 10 to 20 terabytes of information every single night. And when you have that much data, it becomes a real challenge to process all of it and look for the interesting things that are happening.

And there are some new developments in thinking about how you make that data public, so that citizens can get involved and participate in science. There’s a Web site that started a couple of years ago called Galaxy Zoo, developed by the folks who also did the Sloan Digital Sky Survey, to get the public involved in looking at galaxies and categorizing them – to be able to say, is this a spiral galaxy, is this an elliptical galaxy, or not.
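A common way such citizen-science projects turn many amateur classifications into one answer is simple consensus voting across volunteers; the thresholds here are illustrative, not Galaxy Zoo's actual rules:

```python
from collections import Counter

def consensus(votes, min_votes=3, min_agreement=0.6):
    """Aggregate citizen-science classifications for one object.
    Returns the winning label, or None if there's no clear consensus
    (too few votes, or the top label below the agreement threshold)."""
    if len(votes) < min_votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None

votes = ["spiral", "spiral", "elliptical", "spiral", "spiral"]
```

Objects that never reach consensus are exactly the interesting ones: they get flagged for expert follow-up, which is how oddities like the one in the next story surface.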

Last summer, a woman named Hanny, a school teacher in Holland, was looking at the Sloan and came across this little object here. This galaxy has this funny little blue thing next to it. What is that? She didn’t know, so she sent an e-mail to the other folks in Galaxy Zoo: “Anybody know what this thing is?” And nobody did. So the astronomers got involved, and very soon they realized that this is a new kind of object. They retargeted a number of telescopes, such as the Very Large Array radio telescope, and now the Hubble telescope is going to be looking at this object too.

So this is a really interesting example of how democratizing the data – making it available to the public – is changing how science happens. And if you think about business intelligence, what are the kinds of tools we’re going to want to create to allow everybody in an organization to look at its data? With all those eyes looking at it, somebody will spot the trends and things you’ll want to know about, to anticipate what’s going on.

So what’s the future for WorldWide Telescope? One of the things we’re doing – you saw some of the work Kevin talked about earlier related to sensor data – is starting to take this technology and point it back at the Earth, to visualize interesting environmental events: climatology, hydrology, and other things, like the carbon footprint in the atmosphere, to help us understand what’s going on.

We’re also looking at how we can use WorldWide Telescope for business intelligence – how we can do geospatial mapping of information, even time-based information, about where my data is. In other words: where are all my customers coming from if I map that out and overlay it on the footprint of where I’m doing all my direct mail advertising, and how do my most profitable customers overlap with that data set?

That’s just a simple example, but here’s what I’m interested in: after this talk, if you’ll come talk to me, I’d love to learn about your scenarios. What are the kinds of geospatial visualizations that could help your business? I want to learn about those and think about them as we consider adapting this technology.

And as a little bit of an incentive — where’s my little telescope here? Oh, I’m bribing you with this little kid’s telescope here. And we also have stickers.


CURTIS WONG: Yeah. OK. So thank you for your time and I’ll be down there after the talk.

KEVIN SCHOFIELD: Thanks, Curtis. (Applause.)

CURTIS WONG: Thanks, Kevin. (Applause.)

KEVIN SCHOFIELD: I think one of the most important things to take away from WorldWide Telescope is that this is not just about exploring space. For us, we’re in this because we learn so much about spatial data and about bringing different data sets together – not only to the benefit of Microsoft Research, but for the SQL Server team as well. In SQL Server 2008, there’s a whole set of spatial data functionality that literally came out of the work we did with WorldWide Telescope. So we’re super happy about that.

So we’ve talked about gathering data, and we’ve talked about modeling data and being able – like those amateur explorers – to take data sets and explore them from our own desktops.

Where do we go from there? Well, once we build these very rich virtual models of our world, we can turn around and take those models back out into the world with us and do what’s called – that’s called grabbing the wrong remote. (Laughter.) It’s called augmenting the real world – augmented reality, right? We can take these rich data sets that give us more information about what’s going on in our world and use them to enhance our experience of the world.

Here’s one simple example: somebody holding a device and pointing it at the street. They can see a restaurant, they can see a store with a sale, they can see an apartment for sale. Once we can recognize visual imagery and recognize where we are in the world, we can annotate our world and enhance our experiences, because we have these rich models. And that’s a really important part: we want to find a way to seamlessly blend together physical stuff and virtual stuff.

I want to show you one more demo, and a couple of videos of things we’re doing, at both a low level and a higher level, on how we blend together physical and virtual.

The first example I want to show you is something called Microsoft Tag. Many of you may be familiar with what are called quick response codes – those little 2D black-and-white bar codes that get stuck on billboards, magazine ads, and things like that. You can take your phone – I’ll unlock my phone here – and snap a picture of one of these, and hopefully it will be able to read it and take you off to a Web site where you can get more information about the thing.

It’s a new kind of technology, and there are some examples of it out there already, but they tend to be pretty unreliable for a number of reasons. We tend to have a hard time holding our phones still while taking pictures. Camera phones are generally set up to focus at a distance and don’t focus close up very well, so any picture you take close up tends to be out of focus. The lighting may be bad. There are lots of things that can go wrong.

So we’ve been working really hard on a couple of things. One is a new format for tags – this kind of 2D color bar code – where we can use computer vision technology to do a really reliable, resilient job of reading them; the other is how we build the back-end service behind them. In fact, we’ve launched this: if you go up to the Tag Web site, you can create your own tags and try them out yourself, and you can download the phone application to run it yourself.

I’m going to show you a couple of examples here on my phone. I’ve just launched the tag application. So here, for example, is a piece of marketing collateral, and it’s got a tag on it, and I’m running a tag reader application. Let’s crank up the smart phone. All I have to do – I’ll do this close up, and you can see it’s not even focused very well – but as soon as the tag gets within the red box, it immediately reads it, launches Internet Explorer on my phone, and takes me off to the Office site. And there we are at the Office site.

At Microsoft we’ve actually been trying to roll out tags on all of our boxes, so that somebody in a retail store, for example, who wants more information can get it without even leaving the store. I’ll just pull up the tag reader again. This is the Product Red version of Windows Vista, which is part of the Global Fund to fight AIDS, malaria, and tuberculosis, and we’re proud to be a supporter of that.

And if I want more information about Product Red, I can go ahead and snap that bar code. You can see it was moving around a lot and wasn’t even close to focused, and we still managed to snap it, and it takes us to the right place.

We’ve really worked hard to make this resilient. Here’s a bar code I printed out for MSNBC – what’s a keynote without a little bit of theater? (Laughter.) Ah, let’s really get that good and crumpled up. That’s good, all right. And we’ll launch the reader – I’m living dangerously this morning. We come up here, and it reads it and sends us off to – it’s trying really hard; cell phones inside conference halls. There we go, it took us to MSNBC. Yay.

And one last thing I want to show you here, one last example: a business card. If you go up to the Tag site, you can create a tag for your own contact information. So here’s a business card with a tag for my colleague Aaron Hoff, who, in fact, works on the Tag team – the Tag team, ha ha. (Laughter.) And you see, just by scanning that, it went up and got his contact information, and it’s asking me right here if I want to add his contact information into Outlook on my phone.

So it’s a really great way: instead of having to take this information and type it all in, I can just scan it, and immediately I’ve got his contact information. So we’re super excited about that. (Applause.)
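One plausible way a contact tag could work is by resolving to a standard vCard payload that the phone already knows how to import; the names and fields below are made up for illustration, not Tag's actual wire format:

```python
def vcard(name, org, phone, email):
    """Build a minimal vCard 3.0 payload that a tag could resolve to,
    letting the phone offer to add the contact to its address book."""
    return "\r\n".join([
        "BEGIN:VCARD",
        "VERSION:3.0",
        f"FN:{name}",
        f"ORG:{org}",
        f"TEL;TYPE=WORK:{phone}",
        f"EMAIL:{email}",
        "END:VCARD",
    ])

card = vcard("Jane Example", "Contoso", "+1-555-0100", "jane@example.com")
```

Using an existing interchange format like this is what makes the "add to Outlook" prompt possible without any typing.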

One other thing I want to mention about Tag is that Wal-Mart has actually been an early adopter. If you go into Wal-Mart stores, they’re using it for in-store promotions: somebody walking around with a phone can snap a tag on product information or signage within the store and get more information. It started as a Microsoft Research technology, but we’re productizing it awfully fast now, and we’re super excited about where it’s going.

OK, so I’ve got a couple of videos that I want to share with you here. The first three really have to do with Microsoft Surface, which actually started as a Microsoft Research technology. I hope a bunch of you got a chance to play with it at the Microsoft pavilion on the expo floor.

One of the things we’re trying to do with it is make a really thin version of Surface. Here we took literally an LCD screen, ripped the back off, and put behind it a whole array of infrared lights and infrared sensors. So we’re bouncing infrared light up through the screen – you can do that through an LCD screen – and capturing the reflections back. You can see hands and cell phones; if you put a remote control on there, we can see that too. There are certain kinds of materials that reflect infrared light very, very well. So besides ordinary things, you can take something like this knob down here – like the knobs you’d find on your stovetop – and put a bar code underneath it. Here it goes right now: you can see the ID code that we put on the bottom, so you can actually tell what it is and its orientation.
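One classic trick for markers like this is to read the code's cells in circular order and normalize over rotation, which yields both a rotation-independent ID and the object's orientation in a single pass. A sketch of that idea, not the actual Surface encoding:

```python
def decode_marker(bits):
    """Decode a circular bar code read from under an object.
    bits: the code's cells (0/1) read clockwise from an arbitrary start.
    Taking the rotation that yields the smallest binary value makes the
    ID independent of orientation; the chosen rotation gives the angle."""
    n = len(bits)
    rotations = []
    for r in range(n):
        value = 0
        for i in range(n):
            value = (value << 1) | bits[(r + i) % n]
        rotations.append((value, r))
    marker_id, rotation = min(rotations)
    angle = 360.0 * rotation / n
    return marker_id, angle
```

Because the same physical marker always normalizes to the same ID, the table can both identify the knob and report how far it has been turned.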

We can catch reflections of hands, and we can do all the same gesture things we can do with the standard Surface, which is really thick – but this one is just super, super thin.

We can also use objects to interact directly with it. He’s going to show you in a second a paint application, where we’ve got a little paint palette that we can move around – you can see this object, and it recognizes what it is – and you can take a paintbrush and use it to literally paint, or choose a different kind of paint. Once again, all the sorts of things we can do on a traditional Surface.

Part of this research exercise was just trying to figure out how much of this we could really do on this new kind of form factor for Surface, where we wanted not just to do multi-touch with your hands on the surface, but to actually interact with physical objects.

Once again, this is about mixing physical and virtual objects. In fact, you can make an interactive physical object as well. Here’s a case where we’ve got a couple of LED lights and an infrared sensor, and we can actually send infrared signals up to this little device that we’ve dropped on the table.

So we can have interactive electronic physical devices that we’re interacting with. In fact, we can receive infrared signals as well: you can take any old infrared remote control, point it at this thing, and we can see the infrared signals coming out of it. So it works both ways. Very cool.

The second one I want to show you is something called LucidTouch. It deals with what’s called the “fat finger” problem of touch computing: when you put your fingers on the screen to interact with it, you tend to obscure the very thing you’re trying to select. So we put touch sensors not only on the front of the device, but on the back as well.

So if you’re just holding the device and you’ve got your fingers on the back, they’re in exactly the right position to do all the same kinds of touch computing from the back. And, in fact, we can show you – you can see the little red and blue dots – exactly where you would be selecting on the map, for example. You can stretch things out and zoom in, pan back and forth, all the same kinds of interactions, and it’s very, very natural. We can even, as you can see, show semi-transparent silhouettes of your fingers, so you really know where they are while you’re holding the device.
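Conceptually, the rear-touch mapping is simple: a finger on the back of the device is mirrored left-right relative to the display, so its x coordinate gets flipped. A sketch of that coordinate transform, assuming matching pixel grids front and back:

```python
def back_touch_to_screen(x, y, width):
    """Map a rear touch-sensor coordinate to front-screen coordinates.
    A finger on the back is mirrored left-right relative to the display,
    so we flip the x axis; y lines up directly."""
    return (width - 1 - x, y)
```

With this mapping in place, drawing the semi-transparent finger silhouettes is just rendering at the transformed positions.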

And the third one I want to show you – this is really the coolest of all – is something called SecondLight. We took a traditional Surface table and made three changes to it. The first is that we replaced the diffuser – the back-projection surface on top – with something that’s basically one big LCD pixel that we can blink on and off 60 times a second: 30 times a second it’s opaque and acts as a back-projection surface, and 30 times a second it’s transparent.

The second thing we did was put a second projector underneath, inside the table. So now we can set up two different images, synchronized with this new diffuser on top: 30 frames a second get back-projected onto the top, and the other 30 frames a second pass through and can be caught by something above the table – for example, this magic lens. So I can have a picture of the night sky, and I can have something that shows me annotations on that picture. And the lens is just a piece of smoked glass – there’s nothing magic about the magic lens at all, other than the name, right?
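The time multiplexing can be thought of as a 60 Hz schedule alternating the diffuser state and the active projector; the real synchronization is done in hardware, but the bookkeeping looks something like this:

```python
def frame_plan(frame_index):
    """Per-frame schedule for a 60 Hz switchable diffuser: on even frames
    the diffuser is opaque and the surface projector draws on the tabletop;
    on odd frames it is transparent and the second projector shoots
    through to objects held above the table."""
    if frame_index % 2 == 0:
        return {"diffuser": "opaque", "projector": "surface"}
    return {"diffuser": "transparent", "projector": "through"}

one_second = [frame_plan(i) for i in range(60)]
```

At 30 alternations per image per second, the eye fuses both streams, which is why the tabletop image and the through-the-glass image appear to coexist.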

Another example of this is a 3D skeleton of a car that we can just move around. You could do this with basically a piece of wax paper if you wanted.

The third change we made was to take that infrared technology I showed you a couple of minutes ago and stick it in here as well. Oh, and by the way, you can use something like a cylinder of smoked glass as a prism and project something like “hello world” around the outside, and it just circles around.

By using infrared, we can actually see what’s sitting above the table, not just on it – we can catch reflections there as well, and you can hold objects on top. In fact, in this case those two little strips are infrared-reflecting material, so we can see this object and, in fact, what orientation it’s in.

A slightly more sophisticated version of this: because we can tell the orientation of this thing, we can adjust that second image we’re projecting onto it so it never looks distorted, even when you’re looking at it from funny angles.

So now imagine having a 3D MRI. You’re a doctor, you’ve got a 3D MRI and you stick this thing in there, and you can look at any arbitrary slice that you want of that 3D MRI because we can in fact track this thing, and we can in fact do touch computing on the little thing that you’re holding out there because we can see your fingers out there as well. So pretty amazing stuff.
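As a rough illustration of that last idea, pulling an arbitrary slice out of a 3D volume given a tracked plane is a few lines of array math. This is a hedged sketch under stated assumptions (hypothetical function name, NumPy, nearest-neighbor sampling), not the actual Surface or MRI-viewer code:

```python
import numpy as np

def sample_slice(volume, center, u_axis, v_axis, size=64):
    """Sample a size x size oblique slice from a 3D volume (e.g. an MRI)
    along a tracked plane defined by its center point and two in-plane
    axis vectors. Nearest-neighbor sampling keeps the sketch short; a
    real viewer would interpolate."""
    out = np.zeros((size, size), dtype=volume.dtype)
    half = size // 2
    for i in range(size):
        for j in range(size):
            # Walk across the plane in volume coordinates.
            p = center + (i - half) * u_axis + (j - half) * v_axis
            idx = np.round(p).astype(int)
            if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                out[i, j] = volume[tuple(idx)]
    return out

# Sanity check with an axis-aligned plane at z = 5 in a 10x10x10 volume.
vol = np.arange(10 * 10 * 10).reshape(10, 10, 10)
sl = sample_slice(vol, center=np.array([5.0, 5.0, 5.0]),
                  u_axis=np.array([1.0, 0.0, 0.0]),
                  v_axis=np.array([0.0, 1.0, 0.0]), size=4)
```

The same routine works for any tilted plane: the tracked pane’s pose just supplies `center`, `u_axis`, and `v_axis`.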

So the Surface table today is amazing, and we’re not even close to being done with it. We’ve got so much more we want to do to be able to extend it out above the surface of the table itself, but really go even further to blend physical and virtual objects together.

And the last video I want to show you here is really looking at sort of a great example of modeling the world and using it for augmenting reality.

Michael Cohen is one of our graphics and vision researchers. This is at an internal trade show we did a couple weeks back called TechFest; you may have heard of it. On the show floor, he went around with a camera and captured a bunch of imagery, down the hall, in his booth, and around the corner. Then he set up a little treasure hunt: he went into this world and, with arrows and some other annotations, laid out a path to follow through the virtual world he had captured with photography, and put a sort of treasure box at the end.

So now he can pick up his laptop, which has a camera in it, and the bubbles kind of flow along and lead him in the direction the system wants him to go. They’re moving forward, so he moves forward. They get down to the end of this hallway here, and they start floating off to the right a little bit. So he turns right to follow the bubbles and they kind of straighten out for him. But he overcorrects a little bit, and they all start drifting left, and he goes, oh, I need to head to the left a little bit. And as he closes in on the target location, the treasure box appears, and he can go over to the box and pick it up, and there’s a little bit of candy there.

So we can really, today, capture our real world, annotate it, and create these kinds of augmented reality, taking all that extra modeling information we have and layering on additional information that enhances the way we get around in the world. Amazing, amazing stuff is coming down the line here. We’re really excited about this kind of augmented reality, where we can take all these great models and change the way we interact with the world.

So let me bring this close to home, because, you know, this is all a little ethereal. We’ve talked about exploration, we’ve talked a lot about science, and a little about how this applies to how we live and work.

I actually look at this and say it has a lot of application to how we live and work, and we really need to think harder about that. In fact, within Microsoft, the Microsoft Business Division, which Dynamics is part of, has really been thinking about what the next generation of business productivity looks like: not just Office productivity, but business productivity as a whole.

And in fact, they asked us at Microsoft Research to work with them to put together some scenarios about what this would look like ten years from now, in the year 2019. And we worked with them and put together a video, just a few minutes long, really sort of exploring what those scenarios would look like. So I’m going to share that video with you today. Let’s go ahead and play that video.

(Video segment.) (Applause.)

KEVIN SCHOFIELD: So the video always gets a few chuckles, I understand why. You look at this and step back and go, “That’s a lovely piece of science fiction.” But, actually, it isn’t. This is a point I want to spend a few minutes on here.

We worked really hard in putting these scenarios together to make sure we only talked about technologies that we actually believed in, right? Mostly because we could already point to prototypes where they already worked. We were very, very careful in putting this together. Let me just talk about a couple of those technologies, the things that look the most fanciful.

One example is the newspaper, right, the updating electronic newspaper that’s flexible and bendable, just like an old newspaper. In fact, prototypes of that already exist using what they call electronic ink technology: it’s very low power, and you can write an image out onto a surface and then not expend any more power on it until you need to rewrite it with something else. There are already great prototypes of that.
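The key property of that electronic-ink idea, that the pixels are bistable so power is spent only when the image changes, can be modeled in a few lines. A toy sketch with a hypothetical class, not a real e-ink driver:

```python
class BistableDisplay:
    """Toy model of an electronic-ink surface: energy is spent only
    when the displayed image changes, never to hold a static image."""

    def __init__(self, energy_per_update=1.0):
        self.energy_per_update = energy_per_update
        self.energy_used = 0.0
        self.content = None

    def show(self, content):
        if content != self.content:  # rewriting the page costs energy...
            self.content = content
            self.energy_used += self.energy_per_update
        # ...but holding the same image costs nothing at all

d = BistableDisplay()
for headline in ["Morning edition", "Morning edition", "Evening edition"]:
    d.show(headline)
print(d.energy_used)  # 2.0 -- only two distinct images were ever written
```

That zero-cost hold is exactly why a flexible e-ink newspaper can run all day on a tiny battery.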

Go to the Arizona State University Flexible Display Center’s Web site, in fact, and look at some of the amazing stuff they have: prototypes now of real, color, flexible, bendable, foldable displays. Right? Prototypes of this stuff actually exist.

We have small, credit-card-sized, very thin touch interfaces. I showed you Lucid Touch before. The researcher working on that is now working on this: how we make very small, very thin touch surfaces that work on both the front and the back. We have prototypes of this working as well.

There’s a huge amount of cloud computing that will need to go into this, because we’re not going to carry all that data around with us; it’s going to live out in the cloud. We’re doing a lot of work around this as well. In fact, one of the things we’re looking at is how we can use something like the Intel Atom processor, which has about a third of the computing horsepower of a standard CPU we would use in a data-center computer, but costs about a tenth as much, uses about a quarter of the electricity, and generates far less heat.

And think about it: OK, maybe we end up with more processors in the data center, but we can save a lot of money and a lot of energy by refactoring the way we build data centers. So there’s a lot of work we’re putting into that as well.
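Using just the ratios quoted above (roughly a third of the compute, a tenth of the cost, a quarter of the power), the back-of-envelope arithmetic works out like this; the numbers are illustrative stand-ins, not real data-center figures:

```python
# Normalize everything to a standard data-center CPU = 1.0 on each axis.
std = {"compute": 1.0, "cost": 1.0, "power": 1.0}
atom = {"compute": 1 / 3, "cost": 1 / 10, "power": 1 / 4}

# To match one standard CPU's throughput you need about 3 Atom parts...
atoms_needed = std["compute"] / atom["compute"]  # 3.0

# ...but in aggregate they still cost less and draw less power.
total_cost = atoms_needed * atom["cost"]    # 0.3x the standard CPU's cost
total_power = atoms_needed * atom["power"]  # 0.75x the standard CPU's power
```

So even after tripling the processor count, the refactored design spends about 70 percent less money and 25 percent less electricity for the same compute.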

There’s a whole set of other technologies, but they’re all ones that prototypes exist for. You know, I’ve shown you some speech recognition. I’ve shown you vision technology today, along with 3D mapping. Telescoping tiled displays, the ability to tile displays so you can cover walls with them.

You know, we had a whiteboard out here. There are people now saying that within the next three to four years, the cost of covering a wall with LCD displays is going to be less than the cost of covering it with whiteboard surface, right? That’s an incredibly important inflection point, when suddenly the interactive display is cheaper than the plain one.

Gesture recognition, we showed a lot of things around that. Wireless location-based services. So you heard earlier this week about the dynamic business. The way I think about this is that it’s about re-imagining business productivity for the next generation of the dynamic business, and all the same principles apply. We want to enable people to be productive with software and technology that’s familiar, simple, and desirable, that you want to have in your office and in your living room and on your kitchen counter.

It’s about processes, allowing them to be adaptive and flexible, and integrating your processes with the other people you work with. It’s about having a connected ecosystem: connected technology, but also connected people. Letting it all be very, very seamless, so you can have the conversations you want with the people you want, wherever you are, whether you’re in the airport lounge or in your garden; share information and talk about the things you want to be talking about; and build that larger community with your partners, your customers, your vendors, all those other people you want to be able to work with, bringing all these pieces together.

This is going to get built over the next ten years, right? We have no doubt about that. And we’re going to see little pieces of it emerge over time. It’s not all going to suddenly appear ten years from now one day snap your fingers and it’s all there. We’re going to see these pieces over time. We’re completely convinced that this is going to get built over the next ten years, and we want to be the company that builds it for you.

So you may be thinking Kevin’s a space cadet, you know, not only is he sort of painting wild visions of the future, but he’s also somewhat disconnected from the reality of what’s happening in the economy today. And trust me, as a single parent with two kids heading off to college next fall, I am not disconnected from the reality of what’s happening in the economy out there.

I’m fully aware that these are incredibly difficult economic times, and there’s a real, very understandable desire and expectation that we should be hunkering down to wait out the storm. But as Stephen Elop, the President of the Microsoft Business Division, likes to say, the people who hunker down to wait out the storm in environments like this are often the people most at risk of being blown away by it, right?

You know, in difficult times like this, there are also opportunities, right? Opportunities to change the way we work and the way we think, to change the technologies we use to try to get an advantage. We are continuing our investment at Microsoft in both research and development because we believe there’s a huge opportunity. We see a clear vision of where technology is going to take us, the way it’s going to change how we explore our world and our universe, the way it’s going to change business productivity for every single one of us in the years to come. And once again, we want to be the company that delivers those solutions for you.

So that’s why we’re continuing to bet very heavily on R&D, to try to innovate our way out of these difficult times, so that we’re ready and well positioned to deliver fantastic technology and solutions to you when that period comes.

With that, I’m going to wrap up. Unfortunately, I think we’re pretty much out of time, so we’re not going to have time for any Q&A, but I’m going to hang around a little bit afterwards if any of you have questions for me.

I showed a lot of stuff the last hour. In case you want to follow up, I just put some URLs up here of interesting things you may want to actually check out for yourself. I want to thank you so much for your time this morning, for getting up early and making it here for the session. It’s been a great pleasure for me and I really hope you enjoy the rest of the conference. Thanks. (Applause.)

CHRIS CAREN: Thanks very much. So we’re going to wrap now to stay on schedule. Curtis and Kevin will be here if you have any questions. I think Curtis brought his telescope and the stickers, so if you have any good ideas, feel free to come up and bring them.

Just to reiterate what Kevin said, I want to thank you again for taking the time to come to the event and make it the success it’s been so far. Enjoy the rest of today and tomorrow, and thanks again for your commitment to us; believe me, you have Microsoft’s commitment to you as well. Enjoy the rest of the conference. Thanks very much.

