Joe Belfiore and Scott Guthrie: MIX11 Keynote – Day 2

Remarks by Joe Belfiore, Corporate Vice President, Windows Phone Program Management, and Scott Guthrie, Corporate Vice President, .NET Developer Platform
Las Vegas, Nev.
April 13, 2011

ANNOUNCER: Ladies and gentlemen, please welcome Corporate Vice President, Windows Phone Program Management, Microsoft, Joe Belfiore. (Applause.)

JOE BELFIORE: Good morning. Good morning. Good morning. My name is Joe Belfiore, I run Program Management and Design for Windows Phone, and I am excited to be here today with all of you.

Actually, let me ask, do we have Brandon Foy in the audience up here somewhere? Brandon Foy? Can you come up here Brandon? That video you all just saw, Brandon made. I want to introduce Brandon. I don’t think Brandon knew we were going to do this; here, turn around. Brandon is 24, is that right? 24, and Brandon just made a fan video on his own, accurate? Posted on YouTube. A bunch of people forwarded it to us and said, “Man, this video is great, you guys should make TV commercials like this video.”

So, we said, “Well, that’s a cool idea.” And we reached out to Brandon and said, “Brandon, would you make a video for us, a commercial of all our developer apps for MIX to show at the conference?” So that video you saw Brandon said yes. Brandon made that video as a passionate member of the community to show a bunch of the work that you all did, and we’re excited to have you here. So, what did you guys think of Brandon’s video? (Cheers, applause.) Cool.

Now, actually, one more thing. Do you think you like it enough that you could help us tell other people about it? Get your Twitter clients ready because what we’d like to do is if we get enough people looking at it, we’d like to actually turn it into a TV commercial, how does that sound? So get your Twitter clients going. Here is the URL, Brandon’s video is on YouTube.com/WindowsPhone. If we can get 200,000 views, then we’re going to put his video on TV. So, we just wanted to start with that as an example of great support from our community, thanks, Brandon, for coming. So long. (Applause.)

It just felt like a good thematic thing to do since the community matters so much to us. Now, before I jump into a bunch of content about Windows Phone future and what we’re going to be doing, I want to spend a little bit of time on our update situation. Many of you probably followed, I did a brief, informal intro video for MIX on the MIX website, and out of lack of preparation, mistakenly said something like, most people have the February update, and I was wrong.

So, I posted a comment on the blog acknowledging the mistake that I made and said when I came to MIX, I would talk a little bit about updates and explain a little bit about what’s going on. So, I want to do that for two or three minutes before I jump into the content.

The first thing that a lot of people have asked about is why are the updates later than we expected? When we did our announcement we said that we’d have the copy-paste update available early in the year. We had expected it to be earlier than now. It’s now rolling out. It’s not available to everyone yet because some mobile operators are still testing it.

I wanted to talk a little bit about why that happened. We had finished our software, handed it off to our team that does the update process, and they actually started deploying the update, but what happened was we found issues with the way the update was getting deployed on phones because of things that we hadn’t anticipated that happened as real phones were manufactured.

And I’ll give you an example of this kind of thing. We’ve done lots of testing on pre-manufactured phones, but as phones were coming off the manufacturing line, some of them had characteristics that we hadn’t seen before. One example was less than 100 phones coming from one of the handset vendors had a manufacturing setting that was left on the phone to indicate that something had gone wrong in flashing the phone in manufacturing. The phones got out in the wild, they worked OK for end users, but when we tried to apply the updates, it switched the phone back into a factory diagnostic mode, so the update didn’t work correctly.

We saw a few things like that. There was no one issue and none of the issues were particularly widespread. But as we started doing the update, we didn’t know whether they were widespread or not, so we stopped and we added extra capability and infrastructure to the way we did update.

We patched our Zune PC client, we improved the way we do throttling to get updates out to people in sort of a gradual way, and that took extra time that we weren’t anticipating at the time of launch.

So, we felt it would be better to be a little bit patient, make sure that when we get updates out that they would happen reliably, and unfortunately, that caused a delay in getting things out.

We’re pretty optimistic that we’ve gone through that learning process and that we’re not going to face those kinds of issues in the future, but we want to get all the way through it.

The second question that people have asked about is who approves updates or who decides when they can ship?

And when I look at a lot of the comments in the blog posts, there’s a lot of assumptions that people are making and frustrations that people seem to be having around the fact that different mobile operators have updates coming to the phones on their networks at different times.

The fact is, from our point of view, mobile operators have a very real and reasonable interest in testing updates and making sure they’re going to work well on their phones and on their network. Especially if you think about large operators with huge networks, they are the retailer who sells the phone, so they have to deal with returns, they take the support calls and they have to worry about whether their network will stay up and perform well for everyone.

For the most part, the large operators think about updates very similar to the way they think about selling a phone at a store. They want to make sure that it’s going to work in a way they understand and that’s going to be predictable. From our point of view, that’s quite reasonable, and our belief and understanding is that it’s standard practice in the industry that phones from all different vendors undergo operator testing before updates are made available.

The thing that’s a little bit different about our process is that we build the update and we deploy it from our Microsoft servers to everyone. And we’ve decided to deploy updates when they’re ready for open-market phones and for the operators that approve them. So, for the operators that have a quicker process or require less testing, the updates come out more quickly.

That has sort of the unfortunate effect that if you’re one of the people on an operator that’s taking longer, you might know that other people have the update while you don’t. We made a decision to do that rather than go through all of the testing, wait for an end date, and then give it to everybody at once, because that would result in some people getting it later.

We’re still going through this and getting reactions from everyone and trying to figure out the best way to do it. We think that these things will go faster in the future, but we also think it’s pretty reasonable to support the operators in their need and desire to be able to support phones and support their customers. So, that’s sort of the answer to the question about who decides when they go.

And then last, have we learned from this? Do we think it’s going to get better? I would definitely say yes. We brought many years of PC experience to the table in terms of infrastructure for doing updates, but the fact is that phones are different in some pretty significant ways that we’ve now, I think, got a handle on. One is that OEMs do a lot more of the core operating system code on phones than they do on PCs, so those manufacturing-type issues are different on phones than they would be on PCs.

Similarly, the mobile operators play a role in testing, and we’re still trying to figure out the best processes for doing that.

We have tried to improve the team that does this infrastructure. We’ve added people, we’ve staffed it up to make sure we can do it as quickly as we can. We’ve improved our infrastructure and our testing methodologies, including some of the things I described earlier, and we’re trying to get better at communicating about all of this. And I know it’s been frustrating not to get very clear and specific facts.

One thing we struggled with is each of these things involves us and the OEMs or handset manufacturers and the mobile operators, and it’s hard to coordinate which things we can say about what other people are doing. We’re going to try to keep getting better at it, we’ve added the “where’s my update?” page. We’re going to update it regularly, and we expect that we’re going to get these problems licked and get good at this and have no problems in the future.

So, I just wanted to talk about that, give you a little bit of our perspective on some of the facts, and now I’m going to move on and talk about apps and the future of Windows Phone stuff.

So, before I jump in on the next version, it’s worth a quick look back at what all you have been able to accomplish in terms of applications for Windows Phone. And I want to say a big thank-you to all of you. What was represented on the screen there was the 12,000-plus, nearly 13,000 apps that are now in the Marketplace just six months out of launch. There’s north of 35,000 developers that are registered, one and a half million tool downloads, we’re seeing really good apps, which is something that we care a lot about. We want end users to delight in the quality of these apps.

We’re seeing intriguing apps from individual developers that do really cool and fun things, and we’re seeing great brands. There are some terrific recent brands that have come online like ESPN and the New York Times and Zillow. So, the Marketplace continues to fill out, we’re deeply appreciative of the work that you all have done to help make the Windows Phone app Marketplace a compelling place for end users to get apps.

Now, I want to start looking forward and talking about what’s coming later this year. We have a significant update that’s going to be available to all users of current Windows Phone 7 phones, and it will be available on new phones in the fall. Our code name for that is “Mango,” so you’ll hear us say “Mango.” And I want to talk to you about a bunch of stuff that’s coming for developers in “Mango.” We’re not yet talking about the end-user features; these are the things we’re doing for developers.

And I’m going to cover these in three areas. The first is things that we know you’ll care about that you’ve been asking for to increase your opportunity. How can you sell more apps to more people and get more engaged users using your apps in a more satisfying way? So I’m going to talk about some of the things we’re doing to improve your opportunity.

Second, I’m going to talk about new platform capabilities that we’re delivering as part of the “Mango” release that will let you build better apps for end users.

And then last, I’m going to bring Scott out, we’re going to transition over to Scott, and he’s going to talk about improvements that we’re making in the tools and runtimes to show you how you write some of those apps and how much better the developer experience is going to get with this update.

So, that’s what I’m going to cover. Let me start right in and start talking about the opportunity.

Opportunity is really about getting you more customers and improving the number of people that are using your apps and getting access to them. The first thing that I want to talk about here, really, is the scale of users. A lot of you probably have noticed in the press in the last few weeks, separately, IDC and Gartner both issued reports suggesting that by 2015 the Windows Phone ecosystem would be the second largest smartphone ecosystem in the world.

We were pretty excited to see our strategy validated in that way. No doubt, that was helped by the developer support we’re getting and the fast rise of apps in the Marketplace. I think people are looking at us and saying the platform is credible and is growing quickly, and it’s certainly also greatly helped by the announcement of the Nokia deal.

We’re incredibly excited to have Nokia as a partner. I’ve been spending a bunch of time with them, people on our team have been spending time with them. We’re making really terrific progress in engineering work. And, actually, to that end, I wanted to take a moment and introduce to you Marco Argenti from Nokia who wanted to come on stage with me and talk to all of you, including some of the Symbian developers who I know have come to MIX to hear a little bit about Windows Phone and Windows Phone platform. So, Marco, why don’t you come out and say hello to our MIX audience here? (Music, applause.)

MARCO ARGENTI: Thanks, Joe.

JOE BELFIORE: Sure.

MARCO ARGENTI: I’m very excited to be here, thank you, good morning. Great to see so many developers and really I wanted to say that at Nokia we’re really, really committed to our partnership with Microsoft. And I think together we can build a fantastic ecosystem, and we have the scale today to reach hundreds of millions of customers all over the world.

We’re really working very, very hard right now to create the first Nokia phones for Windows Phone, and we’re all running really fast. As you see, I’m wearing my running watch. You know, the opportunity is really to bring the creativity of all the developers to customers in over 190 countries, across over 112 mobile operators, who today are already downloading over five million apps per day and growing. We actually are big believers that mobile billing is a fantastic asset, because every time we introduce mobile billing, transaction volumes actually go up four times. In certain countries, that’s actually by far the preferred way for consumers to pay for content. So, that’s really a great business opportunity and a great asset that we can bring to the partnership.

Today, there’s already a way to reach those customers if you develop with the Nokia application environment, and I think that opportunity then becomes, in the long term, a great opportunity to develop on Windows also for our Symbian customers. It’s really about great mobile products with your creativity that is brought all over the world to create stuff that is fun and useful, and I think also that is profitable for you.

So, in short, I think we’re in a fantastic moment right now. I’m totally excited about our partnership, and I think this ecosystem is something that will actually go beyond what anybody right now is imagining. So, thank you for inviting me here.

(Crosstalk.)

JOE BELFIORE: Thanks very much for coming, Marco.

MARCO ARGENTI: Thanks a lot. Bye bye.

JOE BELFIORE: We appreciate it.

MARCO ARGENTI: Thank you. (Applause.)

JOE BELFIORE: So let me talk a little bit more specifically about some of the ways that we’re going to increase your opportunity. There’s creation and commerce and letting you build apps and sell them in more places, and then there are better ways for you to find apps.

First thing I want to sort of talk about is the “Mango” update this fall is going to support 16 additional languages beyond what we supported in Windows Phone 7. You saw the screen shot there of some of our Asian languages. We’re trying to really bring the typography and user experience sensitivity to these languages, in particular the Asian languages, and do something that’s going to be beautiful and attractive and well appreciated in the countries where these languages are spoken.

In addition, we’re going to increase the number of countries where people can buy apps or — excuse me — first I’m going to explain the countries where you can create apps. Today, you can create and submit apps in 30 countries, and we’re going to increase that to 36 countries. So, there will be a wider range of places around the world where you can submit apps to the Marketplace.

Second, as I started to mention a little earlier, we’re going to increase the number of countries where end users can buy apps from 16 on Windows Phone 7 today to 35 this fall with the “Mango” update. It’s a big stretch, big reach for us to get the Marketplace to that many more countries and increase the surface area into which you all can sell apps.

So, we’re excited to make that announcement, and we think it’s going to really help continue to boost the ecosystem and in particular working with Nokia, who is strong in a lot of these countries around the world, we think it’s going to be a great opportunity for all of you.

Next, what I want to talk about is some of the ways that we’re going to enhance the phone experience and the Marketplace experience so that once people get the phone they can get your apps. So, I’m going to move over here because I actually am going to begin with a bunch of demos.

The first thing I want to talk about is some of the ways that people are going to use the phone to find your apps. So, if we can switch over — there we go.

Today, actually, I’ll give you the standard demo caveat. All the demos I’m going to do are on a very recent build of our “Mango” release; we’re trying to do wireless here on stage and many of you have your phones on, so expect a few glitches. It’s all among friends. I know you’d rather see recent code, so we’re going to try to show you.

So, this is a recent build of “Mango.” And the first thing I want to talk about is helping end users get to your apps better. So, imagine I’ve got a bunch of apps installed on my phone, of course I can set up Live Tiles. But if I’m installing 30, 40, 50 apps, I’m going to be more likely to use the app list over here.

And there are a few things that we’re doing to make it easier for end users to get to your apps if the list is really long. We automatically detect a long list now, and you’ll see here the letters present in the list; just like we have for contacts, we’ve implemented a JumpList for apps. So, if you want to get to Twitter, you touch a letter, you touch T, boom, there’s Twitter, you can launch it. It’s super fast and easy. I’ve been using this on my phone, it’s surprising how addictive it is. Jump over there, hit a letter, launch an app.

So, small thing, nice enhancement, something we’ve heard feedback on and we try to improve.

I’ll give you another example along the same lines. You might notice right there there’s a little search button. I can now touch that and do text typing to search my giant list of installed apps. So, let’s say I want to launch my Amazon app, I can type “AM” — boom, it’s filtered right away to Amazon, and I can launch the app.

But even better, potentially, from your perspective, right there in the same list is a link that will help me as an end user find something on the Marketplace. So, I want to transition over and talk about how we’re improving the Marketplace for end users to get to your apps.

I’m just going to touch search Marketplace there, assuming our connectivity is working on stage. In a minute, you’ll see some of the things we’re doing to make the Marketplace better. Right away, you might notice a few things that are going to be a great enhancement for end users and help them find your apps.

I’ve done a search here and the first thing you see is it’s filtered just by apps, better than when we shipped Windows Phone 7, and there’s a lot more metadata shown here. The publisher is shown, the price is shown, the rating is shown, and this list is sorted by top downloads, or popularity of apps. So, users can find the apps they want.

If what I was really looking for was music, then I can pivot over and see the music, kept separately from the apps. If I wanted to find a podcast, we’re going to have support for podcasts on the device in the Marketplace in the U.S. this fall. I can see all the podcasts that relate to Amazon and so on.

So, it’s a lot quicker and easier for users to find the stuff they’re looking for, and even beyond that, we’ve tried to improve the way people look at apps before they buy them and the buying process.

So, here I’ve opened up Amazon Kindle, which is out there on the Marketplace, and we’ve made this a little bit easier for people to navigate through. I can look at reviews in a full-screen pivot, I can pivot again and see screen shots a little bit better, easier to navigate, and in the spirit of trying to help users find, discover, and install more apps, we’re dedicating a whole pivot right there next to the details, giving rich information about related apps that you might want to get if you like this one.

So, those are some of the things we’ve done there. I’m going to click to install this. We’ve streamlined this process. For free apps, you confirm one time. We automatically navigate over to the app list or the Games Hub where the application is going to be installed. You see right there the Amazon Kindle app as soon as I get network connectivity is going to start installing. There’s a little progress bar, when it’s done, the tile will light up and the app is installed.

So, we’ve really tried to make this process easier for the user from the launching of apps, to searching for apps to finding them on the Marketplace and getting related apps suggested to simplification in buying and installing.

Now, I want to also broaden this and talk to you about something that we’ve done that’s a little bit different and in particular is a new feature I’m really excited to show you for the first time here.

As you know, one of the ideas behind Windows Phone 7 was to introduce Hubs where a user could go to one place and see all of the stuff related to photos or all of the stuff related to games or all of the stuff related to people or music. And just as a refresher course, I’m going to go into the Music and Video Hub here where you see there’s my music collection that I might have synched with my PC. If I pan over, there’s my history. This is music or video I’ve been playing. Whether it’s coming from our built-in experiences or your apps, you get to integrate with the Hub there in history and also in new.

Over here on the right — we have this today, but we’ve improved it a little bit in the “Mango” update. There’s a place for third-party apps to appear. So, if a user goes to Music and Video and they’re really using, let’s say, LastFM or Slacker, those apps are present and taking over the hub, and it’s one place a user can go to do all those things with apps.

This helps users find apps more easily than navigating through screens and screens of icons.

That was a sort of thematic idea for us, and what I want to do now is show you a brand new feature in the “Mango” update that is the same basic idea but gives access to your apps when users do searches. So, what I’m going to do actually is use our built-in multitasking UI here and pan over. We pre-set up a search over here. I did a search using the search button, typed movies and got a search result.

So, here I have my Web search results, my local search, and so on. And as you’ve seen in Windows Phone 7, we give a Bing Instant Answer here when I do a search for things like movies that tells me movies near Las Vegas, Nevada. I can touch that and this is a movie experience — assuming we have connectivity — coming from the Bing service that will show all the movies that are here locally.

Let me try this again. I’m going to plug in my USB cable here and see if I can get network connectivity that way. In fact, I’m going to do this search again. There we go. Let’s try that. OK, so I’ve done a search for movies. I touched the Instant Answer, I get a list of movies that are playing now. We’ll choose “Source Code,” that seems sort of thematically appropriate. And in the “Mango” release now, as you saw on Windows Phone 7, when you do a search and you find a restaurant, we have a restaurant card or a place card that gives you interesting details about a place.

And we hang functionality off of that. Well, here’s a movie card for the movie “Source Code.” I can pan to the right and see show times, or if I pan to the left, all of these cards which come with search results, places, movies, so on, now have a pivot for extras. Just like in the Hub, this is a place where we’re going to connect user actions on the phone with your apps.

Now, think about the scenario. I’m out on the town, I use my cool voice search, I say, “Movies,” it gives me a list of movies. I find a movie I’m interested in, “Source Code,” now I want to watch a trailer or see what the critics are saying or learn about the cast. Well, IMDB would be a perfect app to give me interesting information. Rather than leaving my search task, going back to a screen full of icons to find the IMDB app then launch it then navigate through the IMDB application, I can just from the movie “Source Code” launch IMDB extras, and using a deep link, IMDB jumps right to the relevant content for me so I can scroll down and see what the critics are saying, I can watch a trailer, I can pan over and see the cast and so on.
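(For illustration, here is a rough C# sketch of how an app page might pick up a deep link like the IMDB one, assuming the extras URI carries a query-string parameter; the page, parameter, and helper names are hypothetical.)

using System.Windows.Navigation;
using Microsoft.Phone.Controls;

public partial class DetailsPage : PhoneApplicationPage
{
    protected override void OnNavigatedTo(NavigationEventArgs e)
    {
        base.OnNavigatedTo(e);

        // A deep link such as "/DetailsPage.xaml?movieId=source-code" lands here.
        // "movieId" is a hypothetical parameter name for this example.
        string movieId;
        if (NavigationContext.QueryString.TryGetValue("movieId", out movieId))
        {
            LoadMovieDetails(movieId);
        }
    }

    private void LoadMovieDetails(string movieId)
    {
        // Fetch and bind the relevant content so the user lands directly on it.
    }
}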

We think that this idea is going to help users get more value out of their app, enjoy their app and phone experience a lot better, and keep improving the phone in a way that delivers on that whole glance-and-go idea of getting you in and out and getting your tasks done more quickly.

So, we’re pretty excited about this new concept, search extras, and helping users get value out of searches and your app. (Applause.) Thank you.

So, that covers the first topic area of improving your opportunity and getting users to your app. Now what I’m going to do is spend some time talking about the platform capabilities, the kinds of things that we’re doing that are going to enable you to build better apps. I have three areas that I’m going to talk about. I’m going to talk about HTML5 in the browser first, then some core phone integration and then I’ll wrap up and introduce Scott to talk about developer experience.

So, let me come back here and I’m going to switch over and I want to talk about the Web experience. As many of you know, the “Mango” release this fall will include support for IE9. The phone has IE9 built in, and the thing that we’ve done that’s interesting is the core Web browsing engine on the phone that does HTML rendering and JavaScript and all that sort of stuff is the same code base moved over from the PC. The same exact code that has just shipped and is now getting installed on tons and tons of PCs is the code base that will be on the phone.

As a result — (Applause.) Thank you, yes, we’re excited about that too. As a result, the user gets benefit because they have a fast, high-performance browser engine, and you all get benefit because the way markup gets handled on the phone and the PC will be the same.

Now, it’s true that the graphics capabilities of phones are less than the graphics capabilities of PCs. Yes, you have full access to all that hardware acceleration, but the GPUs typically are not as powerful. There’s a little less memory, and screen size is smaller. So, it’s not like you’re likely to build one site for both, but there are still huge benefits in the markup and scripting being identical between the phone and the PC.

The other thing that’s beneficial for you and for end users is that the HTML5 standard offers lots of great stuff that’s going to make websites work in a smooth way on the phone.

So, first, I’m going to show you a couple of examples of HTML5 standards support in IE, and then I’m going to show you how the hardware acceleration helps with performance.

So, the first site I have loaded here, this is a website. I’m looking at IE9 on the phone. We’ve moved the address bar down here to the bottom so we can dedicate a huge amount of screen real estate to your content.

The first website I’m showing is a sample HTML5 audio website. So, imagine I’m browsing on the Web, I’m looking at an audio service, I can move over here and I’ll touch the play button. Hopefully you can hear this, we have a little mike on stage. I’m now playing audio. I can press the skip forward and play different audio, all of this with standards-based HTML tags.

If I press the start button and navigate out of the browser, just as we now will support background audio for native apps, we support background audio for HTML5 coming from the browser. (Applause.) Oh, thank you, glad you like that feature.

Also, if I’m on the start menu, I can use the phone’s volume controls to pause the HTML audio, so that’s the first example. The next thing I’m going to do is I’m going to switch over — I’ll show you, here is the tab UI.

I want to switch over to a real-world website. This is not a sample that we put together or built up or anything, this is Boston.com, the website of the Boston Globe. And what the folks at Boston.com are doing is using HTML5 video tags to have standards-based support for video on their website. And on our phones, that’s great because it lets the user go straight to the Boston website. I’m going to navigate over here, press play on the video, and you see H264 video natively supported on the phone. It’s quick to load, it streams very nicely. We’ll get the actual quality here. You see the quality of the video looks great.

We provide a full-screen viewer with user controls right there for pause and resume. And if I go back, of course, I’ll go right back to the website.

Also, actually, while I’m here it’s worth mentioning, I’ve seen people posting on blogs that with IE today on Windows Phone, some people are disappointed that in landscape mode you don’t get the address bar there on the bottom. Well, you can see here in IE9 on “Mango” we’ve fixed that. When I go back to portrait, it rotates around and moves back to the bottom.

So, that new design helps make the landscape and portrait experiences a little bit more consistent.

OK, one more demo. Now I’m going to switch over to favorites and I’m going to move this USB cable out of the way. Hopefully I keep my connectivity. And I want to show you an example of the performance benefit that some HTML5 sites are going to get with the hardware acceleration that’s built in.

So, let me bring some friends on stage here alongside our Windows Phone. I’ve got an iPhone 4, a Nexus S, and a Windows Phone 7 running IE9. Remember, it’s the same IE9 core browsing engine with hardware acceleration brought onto the phone.

So, what we’re going to do is we’re going to load HTML5 speed reading demo on all three of the phones. So, let me try to get these ready. And I’m going to do my best to launch them basically simultaneously. In fact, I’m just going to give the iPhone a little headstart here because I have only two fingers.

So, we’ll start the HTML5 speed reading there, and now I’m going to start these two at the same time, ready, here we go, one, two, three, go. And let’s watch these browsers in action.

Now, down at the bottom what you’ll see is a frame rate. And you can see the hardware acceleration gets the Windows Phone frame rate up to 23 here. Android is going at 11, and the iPhone is at two. And you see the page has loaded, there we go, Windows Phone 7’s browser is finished rendering the page. The Android phone is almost there, and the iPhone is a little ways behind. (Applause.).

Thank you. We’re really very appreciative of the work that Dean and his team have done on the PC to introduce HTML5 standards-based native browsing with hardware acceleration, it’s good for the PC, it’s great for phone users too, and we think it’s really going to make a lot of end users and a lot of you really happy this fall with that capability on the phone.

So, that gives you a look at the browser. Now, let me talk about some of the platform capabilities we’re adding for native apps to get richer and better, and in particular I want to talk about the topic of phone integration. There are a lot of things we’re doing in this release to enable you to better integrate your apps with the data and the services and the sensors on the phone. So, there are three categories of work we’re doing that relate to phone integration. The first is user experience enhancement. So, there are quite a few of these. We’ve improved the panorama and pivot controls so that your apps will look and feel faster and better and more like the native ones on the phone.

We’ve done a lot of work on Live Tiles. We’re enabling multiple Live Tiles per app, animations, ways for your code to update Live Tiles in the background without push, you’ll see more of that later. We’re enabling you to access the ringtone setting so all of you can innovate and get ringtones out to users. So, that’s sort of user experience kinds of things we’re doing.

In terms of core support, there’s a long list of things that we’re adding for you in this release. There will be support for TCP/IP sockets. We have a built-in SQL database that you can use. A lot more launchers and choosers — (Applause.) yeah, thank you. A lot more launchers and choosers, better data access to the kinds of things that people pick when they use the launchers and choosers, and data access to the contacts and calendar store on the phone so you can more richly integrate your apps with the users’ data. (Applause.) Thank you.
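(As a rough sketch of how the built-in SQL database might be used from app code, assuming it surfaces through LINQ to SQL over isolated storage; the table and file names here are made up.)

using System.Data.Linq;
using System.Data.Linq.Mapping;

// A hypothetical table for a simple note-taking app.
[Table]
public class Note
{
    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int Id { get; set; }

    [Column]
    public string Text { get; set; }
}

public class NotesDataContext : DataContext
{
    // Local databases live in the app's isolated storage.
    public NotesDataContext() : base("Data Source=isostore:/notes.sdf") { }

    public Table<Note> Notes;
}

// Typical usage: create the database on first run, then insert and query.
// using (var db = new NotesDataContext())
// {
//     if (!db.DatabaseExists()) db.CreateDatabase();
//     db.Notes.InsertOnSubmit(new Note { Text = "Hello, Mango" });
//     db.SubmitChanges();
// }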

Then last there’s a bunch of work we’re doing to let you integrate with the sensors on the phone. (Applause.) Thank you. You’ll be able to get access to the raw camera data, we’re adding support for the compass, and optionally devices this fall will have a gyro, and there’s some cool stuff we’re doing there that I’ll explain later. So, having made these announcements, let me walk over here and give you a few demos of some of these things so you can get a sense of how they work.

Oh, yes, sorry, I forgot. Before I go to the demos, a great example app that we’ve been excited to work with a partner on is Skype. If you think about Skype as a whole experience, it wants to have great networking support, it wants to be able to have access to the contact data on the phone. It wants to have a user experience that integrates and looks nice, and we’ve had a terrific working relationship with the folks at Skype, and we’re excited to see Skype come to the platform this fall when all these sorts of additional enhancements are available.

So, I wanted to mention that, and now I will switch over to some actual demos here. The first thing I’m going to do — if we can get that on screen. The first thing I’m going to do is show you a demo of a ringtone example. I mentioned that we’re adding support to the platform for you and your services to take audio files and set them as ringtones. So, I’m going to give you a quick example of that here. So, here you can see I’ve run an app, it has a bunch of pivots for ringtones that you can imagine are out on a Web service. There’s one interesting example here that might be appropriate for my ringtone, let’s see if I can play the preview. Yeah, that would be appropriate for me this week.

So, let’s say I want to make that my Windows Phone ringtone. I can just touch, the file is downloaded from the Web service, and now the ringtone API is used where I can specify I want this to be set as my ringtone or simply added to my ringtone list, voila, it’s done, it’s now a ringtone, and we hope that this is something that all of you will embrace and go do lots of cool stuff with. So, ringtones.
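(A hedged sketch of what setting a ringtone from an app might look like, assuming it follows the existing chooser-task pattern and the audio file has already been saved to isolated storage; the file and display names are hypothetical.)

using System;
using Microsoft.Phone.Tasks;

private void SetAsRingtone()
{
    // Assumes the downloaded audio was already written to isolated storage
    // under a hypothetical path.
    var task = new SaveRingtoneTask
    {
        Source = new Uri("isostore:/Ringtones/my-ringtone.wma"),
        DisplayName = "My Ringtone",
        IsShareable = true
    };

    // The chooser confirms with the user and reports back whether it was set.
    task.Completed += (s, e) =>
    {
        bool saved = e.TaskResult == TaskResult.OK;
    };

    task.Show();
}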

The next example I want to show is an example of an Amazon shopping app and how it can take advantage of some of the new things that we announced using sensors and user experience integration to make those sensors work in a slightly interesting way.

So, I’m going to launch the Amazon shopping app here. And let’s say I’m going to do a search. What I have here is a book. We’re going to switch over here. This is “101 Windows Phone 7 Apps.” This is written by Adam Nathan, who actually wrote 101 Windows Phone 7 apps and then wrote a book about how you write all those apps. And actually, I think you can find this in the store out there. But really, let’s take a look and see if it’s less expensive on Amazon shopping.

So, I have the Amazon shopping app here. And I’m going to do a search for a product, but instead of typing the name of the book in, I’m going to use the new support for the camera feed to do a barcode scan. So, I’m going to start that up. You can see here I’m just going to hold it over the bar code, it’s already recognized, and there you go — actually, I’ll put this right over here. Amazon has recognized “101 Windows Phone 7 Apps, Volume One,” it’s $32 on Amazon.
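(A barcode scan like this needs the raw camera frames. Here is a rough sketch, assuming the new camera access exposes the preview buffer; the actual barcode decoding is left to a hypothetical library call.)

using System.Windows.Media;
using Microsoft.Devices;

public class BarcodeScanner
{
    private PhotoCamera camera;
    private int[] previewBuffer;

    public void Start(VideoBrush viewfinder)
    {
        camera = new PhotoCamera();
        camera.Initialized += (s, e) =>
        {
            // Size the buffer once the camera reports its preview resolution.
            previewBuffer = new int[(int)(camera.PreviewResolution.Width *
                                          camera.PreviewResolution.Height)];
        };
        viewfinder.SetSource(camera);                     // live viewfinder on screen
        CompositionTarget.Rendering += (s, e) => ScanFrame();
    }

    private void ScanFrame()
    {
        if (previewBuffer == null) return;

        // Grab the current ARGB preview frame and hand it to a decoder.
        camera.GetPreviewBufferArgb32(previewBuffer);
        // DecodeBarcode(previewBuffer) would be a hypothetical library call.
    }
}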

Now, you’re saying that’s cool, we’re glad we have that, but the thing that we’re doing that’s kind of interesting here is when you think about this user experience and scanning a barcode, in that particular case, I can launch the app, navigate around, find the UI for scanning a barcode. Useful, but not particularly convenient.

We thought we could do better. We’re the glance-and-go platform. So, what we’re going to let you do is pin Live Tiles that deep link into your apps right on the home screen. So, here I’m going to choose pin to home, and what I get right there is a Live Tile that jumps right to the barcode scanning. So, maybe I don’t actually need this one anymore, so I’ll get rid of that one. And instead, now, if what I am — I’m an active barcode scanner in a store, I just touch that, watch how fast this is going to launch, it goes right to the barcode scanner, and now the camera is up and I’m scanning my barcode. Just taking one step, not requiring a user to navigate through a whole bunch of stuff to get the barcode scanning functionality. So, we’re excited to see the kinds of things that you all will do with that functionality this fall.
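(A rough sketch of what pinning a deep-linking tile like that might look like in code, assuming the tile API takes a navigation URI plus tile data; the page name and query string are made up for this example.)

using System;
using System.Linq;
using Microsoft.Phone.Shell;

private void PinScannerTile()
{
    // "/ScanPage.xaml" and its query string are hypothetical names for this app;
    // the URI is what the tile deep links to when tapped.
    var tileUri = new Uri("/ScanPage.xaml?mode=barcode", UriKind.Relative);

    // Don't pin the same tile twice.
    if (ShellTile.ActiveTiles.Any(t => t.NavigationUri == tileUri)) return;

    var tileData = new StandardTileData
    {
        Title = "Scan a barcode",
        BackgroundImage = new Uri("Images/ScanTile.png", UriKind.Relative)
    };

    // Creating the tile pins it to Start and takes the user there.
    ShellTile.Create(tileUri, tileData);
}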

Now, let me talk a little bit about sensors. A lot of phones have access to sensors, and we’re adding more of it this fall, but there’s one thing that we wanted to try to do that would enable all of you to write apps faster and enable more people to write great apps with sensors.

It turns out that when you have a compass and a gyro and you’re trying to position objects in the real world, getting a big, raw stream of data can be battery consumptive, and we wanted to sort of make that work a little better. It also requires some pretty tricky math to get right, and that increases the length of time that it will take developers to write apps and maybe disqualifies some people from writing some kinds of apps.

So, we spent a bunch of time with Microsoft Research who had been trying a bunch of things out, and we’re excited to announce a capability of the platform we call motion sensor, which integrates the compass and the gyro in an interesting and simple way to make it very straightforward for you to write motion-oriented apps. And I’m going to show you an example demo of that now.

So, let’s switch over here. We worked with the folks from Layar, you’re all familiar with Layar? I’m going to launch the Layar app here. I don’t know if you can see that up there on the screen. Layar does augmented reality and lets people create layers. So, we created a sample layer of the location of tweets at MIX.

So, I’m going to launch that, and now you’ll see we’re switching over into an augmented reality mode. I’m going to turn it this way and aim it at the audience. By the way, don’t bother tweeting because this is sample data. And you can see as I scan around, the position of those tweets in the audience is shown right out there. We’re panning around and, there you go, you get a sense for how that works. They’re staying in place and this is using that motion sensor API to get this up and running fast and easy, now available to all of you developers — well, not yet — this fall available to all of you developers on Windows Phone 7. So, this we’re excited about and we hope you’ll all take advantage of that as well. (Applause.)
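(A hedged sketch of how an app might consume the new motion sensor, assuming it is exposed as a fused attitude reading alongside the other sensors; the overlay logic is left as a placeholder.)

using Microsoft.Devices.Sensors;

public class MotionTracker
{
    private Motion motion;

    public void Start()
    {
        // Not every device will have the gyro/compass combination this needs.
        if (!Motion.IsSupported) return;

        motion = new Motion();
        motion.CurrentValueChanged += OnCurrentValueChanged;
        motion.Start();
    }

    private void OnCurrentValueChanged(object sender,
        SensorReadingEventArgs<MotionReading> e)
    {
        // The platform fuses compass, gyro and accelerometer into a single
        // attitude reading, so the app doesn't have to do the math itself.
        float yaw = e.SensorReading.Attitude.Yaw;     // radians
        float pitch = e.SensorReading.Attitude.Pitch;
        float roll = e.SensorReading.Attitude.Roll;

        // Position overlay elements (tweets, labels, etc.) from yaw/pitch/roll;
        // note this event fires off the UI thread, so marshal any UI work.
    }
}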

OK. My last topic in terms of platform capability is multitasking. I’m sure you all paid attention back in February at Mobile World Congress when we announced a few things. We announced Twitter support, People Hub. We announced IE9 and we announced support for multitasking. Well, today at MIX we’re going to give you a whole lot more information about how some of that multitasking works and some of the unique features that our multitasking solution involves.

Basically, what we said at Mobile World Congress was close to what’s on the screen now. First, we’re implementing fast app switching. So, when a user navigates away from your app, it’s just suspended and not put away totally. We try to keep it there as long as we can, as long as we have memory available, so if the user goes back, we wake it back up instantly and they get an instant-resume experience. So, that’s a nice thing.

Second, our philosophy is really to try to make sure that the battery performance stays very good on Windows Phone. So, we’re implementing a number of common multitasking scenarios in the OS so that they can be handled on your behalf. Background audio is a good example, a file download service will be part of the platform that you can use and that will guarantee file downloads, and alarms so users can see pop-up notifications. And then there’s one other thing we’re doing which I’m going to talk about in a minute.
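(For the alarms piece, a minimal sketch of what scheduling a pop-up notification might look like, assuming it follows a scheduled-action pattern; the alarm name, text, and timing are made up.)

using System;
using Microsoft.Phone.Scheduler;

private void ScheduleBoardingReminder(DateTime departureTime)
{
    const string alarmName = "flight-reminder";   // hypothetical name

    // Replace any earlier alarm registered under the same name.
    if (ScheduledActionService.Find(alarmName) != null)
        ScheduledActionService.Remove(alarmName);

    var alarm = new Alarm(alarmName)
    {
        Content = "Time to head to the airport for your flight.",
        BeginTime = departureTime.AddHours(-2)   // fire two hours before departure
    };

    // The OS raises the pop-up at the right time even if the app isn't running.
    ScheduledActionService.Add(alarm);
}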

But before I do that, what I want to do is switch over to a demo and show you a couple of real-world apps that are now in progress and going to take advantage of some of those multitasking features. The first one, background audio: I’m excited to announce and demo for the first time Spotify coming to Windows Phone 7. (Applause.) Yeah, we’re pretty happy about this too. Those of you from Europe I’m sure are very excited, Spotify being very popular over there.

So, here’s the Spotify app. I can pan around, you can see it’s a nice panorama. I’m just going to do this quickly. I’ll go into an album, the track list is shown. I can click a track and start playing it. There you go, you can hear music is now playing and the thing that our users will be excited about is that when I navigate away and go into other apps, check my email, all these sorts of things, music continues to play. And if I use the volume controls there, you can see they’re available and I can pause, skip tracks, the track information is shown and so on.

So, we’re excited to have Spotify coming to the platform and we know lots of end users will be as well. And we’re excited to have you use some of those multitasking APIs in your apps.
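(Behind an app like that, a small background audio agent handles play, pause, and skip on the OS’s behalf. A hedged sketch of that shape, with placeholder track data and a hypothetical playlist helper.)

using System;
using Microsoft.Phone.BackgroundAudio;

// Runs in a separate background process; the OS calls it for playback events
// even after the user has navigated away from the app.
public class AudioAgent : AudioPlayerAgent
{
    protected override void OnUserAction(BackgroundAudioPlayer player,
        AudioTrack track, UserAction action, object param)
    {
        switch (action)
        {
            case UserAction.Play:
                player.Play();
                break;
            case UserAction.Pause:
                player.Pause();
                break;
            case UserAction.SkipNext:
                player.Track = GetNextTrack();
                break;
        }
        NotifyComplete();   // tell the OS this request has been handled
    }

    protected override void OnPlayStateChanged(BackgroundAudioPlayer player,
        AudioTrack track, PlayState playState)
    {
        if (playState == PlayState.TrackEnded)
            player.Track = GetNextTrack();
        NotifyComplete();
    }

    private AudioTrack GetNextTrack()
    {
        // A real app would consult its playlist; this is placeholder data.
        return new AudioTrack(new Uri("http://example.com/track.mp3"),
                              "Track title", "Artist", "Album", null);
    }
}

(The foreground app then starts playback through the shared BackgroundAudioPlayer.Instance, and the audio keeps going, with the volume-control overlay available, when the user navigates away.)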

Next up, I wanted to give you a quick look at fast app switching and our multitasking UI. So, I’m also excited to show you for the first time — I’m going to pan back here, and you might recognize there in my back stack the familiar UI of “Angry Birds,” which we’re showing for the first time here at MIX. Perfect. I’m excited to tell you, and everyone else who is following on the Internet, that it will be available on our Marketplace on May 25th. It will be available for Windows Phone 7 users, and then of course it will take advantage of these multitasking capabilities for the “Mango” release coming later.

Now what I want to do is talk a little bit about a unique thing that we’re trying to do in our multitasking approach. If you think about the problem of how to enable arbitrary code to run in the background, that gives some flexibility for user scenarios, but it has a big down side in terms of its unpredictable effect on the battery. The Android platform, for example, has arbitrary third-party code running in the background, and it’s quite typical for users to need to do task management on their own or figure out what apps are doing what and affecting the battery.

We wanted to enable a lot of the scenarios of background code, but we wanted to do it in a way that could protect the battery. So, there were a few things that we thought about in combination. And what we’re going to describe today is a new concept: background agents, which you write as part of your app, a separate part of your app that we schedule in a battery-friendly way, sometimes running periodically for a short period of time when the user is on battery, and giving you events, like when the user plugs into power and has Wi-Fi, so you can go feast on that data and that power. And we let these things update tiles and do things in the background. We call these live agents. They are our attempt to balance those two problems: letting you run code in the background, but making sure the user has a highly predictable battery experience where they don’t have to actively, personally manage running processes.
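(A rough sketch of what one of these live agents might look like, assuming it is registered as a periodic task and implemented as a scheduled agent that refreshes a pinned tile; the flight data and task name are placeholders.)

using System.Linq;
using Microsoft.Phone.Scheduler;
using Microsoft.Phone.Shell;

// The agent is registered once from the foreground app, roughly like:
//   ScheduledActionService.Add(new PeriodicTask("flight-status-agent")
//   {
//       Description = "Keeps the flight tile up to date."
//   });
// and the class below then runs on the OS's battery-friendly schedule.
public class FlightStatusAgent : ScheduledTaskAgent
{
    protected override void OnInvoke(ScheduledTask task)
    {
        // Check flight status or the user's location here (placeholder logic),
        // then refresh the app's pinned tile without any push notification.
        ShellTile tile = ShellTile.ActiveTiles.FirstOrDefault();
        if (tile != null)
        {
            tile.Update(new StandardTileData
            {
                Title = "QF 3112",
                BackContent = "Running late, head to the airport now"
            });
        }

        NotifyComplete();   // tell the scheduler this run is finished
    }
}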

Later on today, we’re going to describe this in more detail, and in some of the talks later today, we’re going to show you the very simple settings UI where a user can go change the policy of whether a particular app’s live agent is running in the background or not.

So, what I want to do is show you one last demo example of an app that actually is bringing together a bunch of these multitasking and platform integration concepts to give sort of a rich notification experience for a common scenario. So, I want you to imagine I have some flights. So, I’m going to go to Australia. So, I’m going to be flying Qantas. And down here at the bottom of my start menu, I’ve already added one tile for my first flight.

You can see there the Live Tile telling me I’m on Qantas flight 3112, leaving Las Vegas from terminal 1D, you saw the animation, that’s new, at 10:50 a.m., and it’s green, it’s on time, I’m not late, all is well.

But I’m going to scroll up here and actually launch the Qantas app and I’ll show you the app and how we’re going to use these live agents and other capabilities to give me a good experience as a user wondering what I need to do for my flight.

So, here’s the Qantas app. You can see there’s my frequent flyer info, there’s the flight that I have this morning leaving Las Vegas going to Los Angeles. And here’s a flight that I have tomorrow night leaving Los Angeles and going on to Melbourne, Australia.

Well, it’s almost time for me to go, so I’m going to take this and actually pin it to my start menu as well. So, now I have two tiles for the Qantas app, each representing something that I care about, my flight today and my flight tomorrow.

And you can see they’re giving me different notifications. That tile was telling me check-in is now open for my flight tomorrow. OK, that’s useful information. Let’s say I’m going to go back in here and I’m going to go through the check-in process so I know I’m checked in.

I touch the Live Tile and as you saw in other examples earlier today, the Live Tile links deeply into the app, I can choose check-in now, I won’t bother to do that, I can look at my arrival or booking info and so on. Imagine that I checked in, time has elapsed. It’s now important for me to get to the airport, I’m here on stage with you, when really I have this 10:30 flight, what am I doing? In a few seconds, the app’s going to notify me, hey, here’s an alarm, taking advantage of this multitasking capability, you’ve got a flight, you should probably be thinking about getting going, you’re within this time window.

In fact, if I scroll down now and take a look at those Live Tiles, you see this first flight has turned from green to red because it knows my location. Its background live agent is checking my location and checking the time, saying, OK, based on where you are and the time, you’d better get going to the airport or you might miss your flight.

The red tile’s giving me as an end user a cue that I might want to do something about this. So, I’ll touch it, launch right into the Qantas app looking at information about that flight, and now the flight — the app is handling this case for me. It says not sure if you’ll make it, alternate flights are available. I can expand that and see what other flights, I could change my flight to one of those, or I might decide I’ll leave right now and I probably have a chance to make it, in fact, this app is going to help me out by showing one of our new launchers, which is the ability to jump right into directions.

So, if I’m going to head to the airport, I can get very clear, step-by-step directions to get me there. The app is using all of these capabilities, new Live Tile features, multiple Live Tiles, deep linking, live agents, a background notification to make my experience of figuring out what I need to do to get to the airport a lot better.

So, thank you. (Applause.) So I’m going to wrap up my demos and my section of the talk with that. Trying to summarize all those benefits. Really, what we were trying to do and what I talked about was give you better opportunity to connect with and find users and sell your apps and give you capabilities to build richer, better, more compelling apps that integrate in a deep way with the unique phone experience on Windows Phone 7. Thanks a lot for your time. I’m going to introduce Scott, who is now going to — before I do that, I forgot, one quick thing. I know, on your minds, when will you get the tools? And so I’m going to tell you, we will have these tools available for you next month. We don’t quite have them yet. We have the last bits of polish we’re putting on them, as Brandon Watson likes to say, I think you know Brandon Watson, we’re waiting on the delivery of unicorn tears to get the tools out to you. But they will be free, they will be online, they will be complete, they’ll work well, and we’ll let you know as soon as they’re available.

With that, I’m excited to introduce Scott to come back out and talk about the developer experience, the tools, and how you go about building all this stuff. Thanks very much for your time. Scott Guthrie. (Applause.)

SCOTT GUTHRIE: Well, thank you. It’s great to be back here. Joe just finished talking about some of the great user experiences that we’re bringing to the next version of Windows Phone. I’m going to spend a little bit of time now talking about how we as developers can build great games and applications for it.

The development tools for Windows Phone today are the best on the market. And with this new release, we’re going to be making them even better. We’ve introduced a ton of new capabilities that we’re pretty excited about. And what I’d like to do is actually just start off with a quick demo of some of them.

So, I’m standing here in front of a machine that has Visual Studio 2010 Express for Windows Phone on it. This is actually the new version that has the “Mango” tooling built in. And I’ve got a game here which is called “Marble Maze,” and I’m going to go ahead and quickly run this application on it.

One of the things that’s nice about the Windows Phone tools that we have today is that they have a built-in emulator so that you don’t actually have to have a physical phone in order to do development. And so I launch this game, load it up inside the emulator — it does everything the phone does. I’m going to actually change the orientation a little bit so it’s easier to see. And what I have here is this “Marble Maze” game that’s in the Marketplace today loaded in it.

I can hit play and three, two, one, the game will start playing. Now, this is a nice game — you use the accelerometer on your phone, and you kind of try to make sure the marble doesn’t fall in the hole.
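(A game like that reads the phone’s tilt from the accelerometer roughly like this; the mapping from tilt to marble movement is left out.)

using Microsoft.Devices.Sensors;

public class TiltInput
{
    private readonly Accelerometer accelerometer = new Accelerometer();

    public double TiltX { get; private set; }
    public double TiltY { get; private set; }

    public void Start()
    {
        // Readings arrive on a background thread as the player tilts the phone.
        accelerometer.CurrentValueChanged += (s, e) =>
        {
            TiltX = e.SensorReading.Acceleration.X;   // roughly -1..1 per axis
            TiltY = e.SensorReading.Acceleration.Y;
        };
        accelerometer.Start();
    }

    public void Stop()
    {
        accelerometer.Stop();
    }
}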

That works great on a physical phone, but one challenge that you have today is that with an emulator, it’s pretty hard to actually simulate game play as part of that.

So, one of the nice things that we’re adding with the “Mango” tools is a new additional tools option here that brings with it a couple of additional capabilities to the emulator, one of which is accelerometer support.

So, what I have here is a 3-D version of my phone. You can see here that now, within the emulator, I can actually control it. (Applause.) I’ll be honest with you, last night I was getting ready for the keynote — this was the part of the keynote I was most stressed about, avoiding the holes. Anyway, you can see there are lots of cool things you can do here.

You can also play prerecorded gestures. So, for example, this particular game has a shake gesture to reset the game. You can just click shake, hit play, and it can actually simulate that gesture immediately within the tool.

So, it makes it much easier now for you to go ahead and simulate the accelerometer within the emulator, you don’t need to have a phone in order to do it.

We’ll take a look at a couple of other benefits that we have here. I’m going to load another application and show off some more emulator features. I’m going to load a really cool application here called 4th & Mayor. How many people have tried this app out? (Applause.) Jeff Wilcox is here at the event, it’s his application, a phenomenal Foursquare application. It’s location-based, so basically you go to a location, you can check in with Foursquare — really, really nice application.

Now, from an emulation perspective with the emulator we have today, the downside, of course, is we don’t really give you an easy way to simulate a location because you’re always at your PC. And so for an application like this, you end up having to write a little bit of code to do your simulation for GPS.

What we’ve done with the emulator is added support now so that you can click on a location tab on the additional tools, and you can now pick where you are locationwise.

So, if I want to pick somewhere in downtown Seattle, I can just click on the emulator. Notice we’ve just updated the emulator so that it thinks it’s now in Seattle. (Applause.)

So, it’s actually pretty cool in the sense that we can search as well within this map. So, we’ll search on Mandalay Bay. Let’s pick something in the middle of the casino here, and you can see, we load up there, I can go up the strip, I can pick Bellagio, update, and it gives me the ability to easily do real-time testing throughout my application. And just like with the accelerometer, I can also load predefined locations from an XML file. So, you can see here, I’ve got six locations on the map. I’m going to say, “Hey, let’s walk this strip; every two seconds, let’s change locations,” and now I can simulate multiple locations all at once directly within the emulator. (Applause.)
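(On the app side, location arrives through the same API whether it comes from real GPS or from the emulator’s location tool. A rough sketch, with the position handler reduced to a placeholder.)

using System.Device.Location;

public class LocationTracker
{
    private GeoCoordinateWatcher watcher;

    public void Start()
    {
        // High accuracy asks for GPS when it's available; the emulator's
        // location tool feeds simulated coordinates through this same pipeline.
        watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High)
        {
            MovementThreshold = 20   // meters of movement between events
        };

        watcher.PositionChanged += (s, e) =>
        {
            GeoCoordinate position = e.Position.Location;
            // e.g. look up nearby venues to check in to (placeholder logic).
            System.Diagnostics.Debug.WriteLine("Now at {0}, {1}",
                position.Latitude, position.Longitude);
        };

        watcher.Start();
    }
}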

So, those are a couple of benefits that are coming as part of the emulator. Let’s now switch gears and talk about some of the other tooling benefits that we’re pretty excited about. One in particular that I’m really excited about is our new profiling support that we’re building into the phone tools. This will be part of — all the features I’m showing here today are part of the prerelease version of our phone tools, and this profiler stuff that I’m going to show is pretty darn cool. It basically allows you to very easily identify hot spots within your application, pinpoint problems, and it’ll actually walk you through the code that you need to fix in order to address them.

So, what I’m going to do here is just sort of show things off. I’ve got an application here called Home Advisor. I’m going to run this application, and it has some performance problems. And what I can do inside Visual Studio now is just hit Start Windows Phone Performance Analysis, and it’ll bring up this little dialogue here. And this dialogue basically allows me to pick whether I want to measure execution time, in which case I’ll measure the GPU and all the visuals being drawn, as well as my code execution.

We’re also adding support for memory allocation so I can track on a per-object basis how many bytes of memory are being generated, who’s holding onto what, and optimize my memory as well.

For this particular demo, I’m just going to measure execution time. I’m going to launch the application, and assuming my 30-foot USB cable back stage is working, yes, it is, we can switch to the camera here.

You can see we just deployed this application onto a real phone, and it took a little while to load up. That’s one of the performance issues we’re going to need to investigate. And it’s a simple app, but again, it has some performance problems. So, if you notice, my list scrolling is not as smooth as I’d like it to be. And if you notice, when I go ahead and slide, it’s really jerky.

So, we’ve got some performance problems in this app we want to go ahead and identify how to fix them.

So, we’re going to pull out of the app, we’ll switch back to Visual Studio now. And you’ll notice Visual Studio is gathering all the data from that particular phone. It’s going to analyze and parse the log files, and then it’s going to generate for me a report with a whole bunch of information about what was going on.

You’ll notice it captures a ton of data. It’s looking at frame rate, and it’s telling me frames per second throughout the lifetime of the app, how it’s doing. 60 frames per second is pretty smooth. You’ll notice the dips here — we dropped to about 15 frames per second, and that’s really bad. 30 frames per second is kind of on the edge. And you’ll notice that when we’re doing certain things like list scrolling, we’re a little jagged there, and so there are some issues that we need to fix to make our list scrolling much smoother.

We’re also capturing CPU utilization. The green indicates the work you’re doing on the UI thread. We capture memory usage throughout the application; we can look at animation storyboards, which ones are firing. We can look at when you’re loading an image from the network, and we can also capture garbage collection events that are happening during the lifetime of the application. So, just a ton of data.

What’s cool about this data, though, is not only do we just sort of show you the data, but we allow you to easily analyze it. So, for example, let’s look at the startup time issue that we noticed. I can highlight any of the timeline slices within the tools, and it will go ahead and analyze the data and provide some helpful observations telling me what it thinks the problems are from a performance analysis perspective for that particular time slice.

And so in this particular time, it’s saying, “Hey, you’re spending a lot of time on the UI thread doing something, a lot of execution going on.”

What’s nice, then, is I can drill in to see exactly what we’re doing. So, now I’m going to go ahead and drill in, and I can see precisely where I’m spending my time, what methods I’m actually calling and what methods are on the call stack, and how often they’re actually running, both inclusive and exclusive. And you’ll notice here there’s a bunch of system code running, and there are two hyperlinks within this list, one for NewListings and one for ParseJson.

Let’s click NewListings, and it’ll jump directly into my code, showing that we’re spending a lot of time in this method, this property. This property is calling the ParseJson method, so let’s drill into that. And you can see here, as part of my application startup, one of the issues it’s identified is that in this particular method we’re spending a lot of time parsing a JSON response coming in from the network. This is a good example of where, because we’re actually doing it on the UI thread, we’re blocking both the UI from rendering and the application from fully starting up.

What we probably want to do is put this on a background thread, which would make it a lot more efficient, and make my application start up a lot faster.

And so very quickly we’re able to identify this is kind of a hot spot, and we can go ahead and change our code and actually make it a lot faster.
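As a rough illustration of the fix being described, here is a sketch of moving the parsing onto a background thread and only touching the UI via the dispatcher; the ParseJson method and the ListBox parameter are stand-ins for the demo’s code, not the actual app:

    using System.Collections.Generic;
    using System.Threading;
    using System.Windows;
    using System.Windows.Controls;

    public class ListingsLoader
    {
        // Parse on a background thread, then hand the results to the UI thread.
        public void LoadListings(ListBox listingsList, string json)
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                List<string> listings = ParseJson(json);     // heavy work, off the UI thread

                Deployment.Current.Dispatcher.BeginInvoke(() =>
                {
                    listingsList.ItemsSource = listings;     // UI updates back on the UI thread
                });
            });
        }

        // Stand-in for the demo's expensive JSON parsing.
        private List<string> ParseJson(string json)
        {
            Thread.Sleep(500);                               // simulate slow parsing
            return new List<string> { "listing 1", "listing 2" };
        }
    }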

Second issue I want to look at is that list-scrolling issue on the houses. And so I’m going to highlight this region right here. Again, you’ll notice at the bottom here it updates based on the data that we analyzed, and it’s saying, “Hey, you’re spending a lot of time in the visual tree invalidating certain things and drawing things.”

Let’s actually drill in to understand why. And so I can actually, directly from the profile now, analyze on a per-frame basis what I’m doing on every frame inside that application that was drawn. So, you can see all the frames. I’m going to sort it on CPU time. You’ll notice on frame 170, we’re spending an awful lot of time doing something. I can drill into that, I can see the frame rate, frame count, how much time is being spent on the GPU versus the CPU.

If I want to drill in further, I can go ahead and look at the function list. Then we’ll sort it. I can look down through my code here and — there’s the method called convert that’s in my code. Drill in. And you’ll notice I have a sleep statement accidentally in there. If only all those problems were that easy. But anyway, you saw how the tools can quickly identify where you’re spending your time, again, precisely pinpoint the problem and give you a lot of data on how to fix it.

Last thing we’ll then look at is this panorama scroll at the far right where it was really jerky. And I’m going to highlight that region. Again, it’s going to give me a couple of warnings and some summary information and details. One of the things it’s saying here is you’re spending a lot of time doing animation. And the animation you’re doing is bumping you off of the GPU into the CPU and then back again, and that’s what’s tanking our frame rate.

And if I go ahead now and actually drill into the storyboards here, I can actually see all the storyboards that are executing. I can actually see the entire visual tree of all the controls rendering. And you’ll notice here I have a color animation using key frames. I can click on that. And this is actually going to take me directly into my XAML. It’s showing me that I have a forever animation that’s running, and it’s a color animation. If you look at the warning, it’ll basically say, this is going to cause you to have to bump off of the GPU onto the CPU to do the color calculation and then back. You can replace this with a different type of animation and get much smoother animation as a result.

So, three sort of simple examples of how you can use the profiler to kind of pinpoint real-world problems inside applications and not only just get data, but more importantly, kind of identify how to fix them and make your application a lot faster.

So, lots of other improvements. (Applause.) Lots of other tooling improvements, but hopefully that gives you a sampling of just some of the great things that are coming. Again, the great thing is all those features are available within the free tools.

So, I just showed you some of the new profiling tools that help you identify hot spots within your code. We’ve also spent a lot of time with this release, though, optimizing our own code as well and doing a bunch of optimizations that are going to make your apps work faster without you actually having to change any code to take advantage of them.

In this slide right here, it just has a list of four specific performance optimizations that we’ve made in the “Mango” release that you get to take advantage of, again, without having to change any code within your app.

I want to spend a little bit of time just talking about each one of them. And I’m going to show a video, kind of a before-and-after, one running Windows Phone 7, one running a recent build of “Mango” to kind of highlight the performance differences and improvements that you get.

First one here is one that a lot of people have struggled with today, which is how to get really smooth scrolling. And in general, on mobile devices, list scrolling is a hard thing to get right. We’ve made a bunch of optimizations as part of the “Mango” release, which make smooth scrolling and user input a lot more efficient. In particular, we’ve taken user input off of the UI thread, so we’re now doing it on a background thread, which means that it never gets interrupted when you’re scrolling; you get immediate input notifications, and when you scroll up and down, it’s kind of buttery smooth.

And what I want to do is just show a video here of an existing application running both on Windows Phone 7 and on “Mango,” no code changes, and you can see the difference. So, left-hand side you’ll notice that when you scroll and you stop, it takes a little while to stop; “Mango” is instantaneous. (Applause.) Notice here as you’re moving up and down, the UI thread is pretty busy; on “Mango” totally smooth and immediately responsive. And the great thing is you don’t have to change any code in order to take advantage of it. That’s just a standard list box, and you get those benefits for free.

The second optimization I want to quickly show a video on is image decode. With “Mango,” we’ve optimized image decode, and the networking stack, so that if you’re loading images from a network, it no longer blocks the UI thread as you’re doing it. This makes infinite list scrolling scenarios, where you’re bringing data over the network much, much better. And, again, here’s another example of an existing application that has some problems with that, and how it gets a lot better with no code changes in “Mango.”

You’ll notice “Mango” loads much faster to begin with, but more importantly, when you reload that data, you’ll notice that with Windows Phone 7 today, it basically freezes as it’s getting the new data, and it’s a little chunky. With “Mango,” it’s immediately responsive, even if the data hasn’t already come in, and then basically as it comes in, even if you’ve got high latency on the network, it will still stay super responsive. (Applause.)

With “Mango,” we now have a generational garbage collector that allows us to garbage collect memory without pausing the application. This enables much faster application startup and much smoother interaction, especially with gaming. So, if you’re allocating lots and lots of memory instead of having a tiny pause now and then, it should basically be constant and smooth throughout. Again, here’s another video that highlights those benefits.

(Video segment.)

We’re launching two applications, they’re using XNA, and they are allocating a lot of memory for graphics and buffers upfront. Notice how fast “Mango” starts up, and you’ll notice as gameplay starts that it’s silky smooth. You never actually see any stutters or pauses, even if you’re allocating lots and lots of memory.

The last thing I want to talk about is memory usage. We spent a lot of time with this release optimizing memory usage, and in particular making our own code a lot more memory-efficient. Generally, we’re seeing about 25 to 30 percent memory improvement wins on user applications when they’re running. And, again, you don’t have to change any code to take advantage of it. And here’s a quick example of that in action.

So, here’s the Facebook application. And you’ll notice “Mango” loads a lot faster. And we’re going to go through a couple of scenarios. We’re loading lots of messages as part of this. We’re going to scroll through and just sort of show off a bunch of different screens. The exact same code in both. The “Mango” version scrolls smoother, but even more impressive, when you’re actually at the end of the scenario, look at the total amount of memory being used by that application — it’s about 30 percent smaller with “Mango.” Again, no code changes, you just get to take advantage of that for free.

Again, as part of kind of the multitasking benefits that Joe showed, using less memory means it’s more likely that your app will stay loaded inside memory when you’re multitasking.

So, those are a couple of the performance improvements, and the great thing about them is you get to take advantage of them without changing code. There are a lot more performance improvements we’ve also made as part of this release that you can benefit from, and obviously tools like the profiler will help you pinpoint hot spots within your own applications and improve them even further.

Let’s switch gears now and talk a little bit about capabilities. Joe highlighted a bunch of the capabilities as a part of this new release. “Mango” includes almost 1,500 new APIs, and here are just a few of them. They include a bunch of APIs that expose phone-specific capabilities. They also include the full Silverlight 4 API feature set.

I would like to spend a little bit of time just kind of looking at a couple of these capabilities in more depth. The first one I’m going to talk about is database. So, “Mango” now includes a local SQL database that allows you to save and query data within your application. We also now support LINQ, so that you can query against that database, and we provide in the box a full object-relational mapper that allows you to manipulate data and save it back into the database at any point.
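A minimal sketch of what that looks like in code, assuming the LINQ to SQL pattern in the “Mango” tools: a table class with mapping attributes and a DataContext over a database file in isolated storage. The FeedItem type and its columns are illustrative:

    using System.Data.Linq;
    using System.Data.Linq.Mapping;

    [Table]
    public class FeedItem
    {
        [Column(IsPrimaryKey = true, IsDbGenerated = true)]
        public int Id { get; set; }

        [Column]
        public string Title { get; set; }

        [Column]
        public bool IsRead { get; set; }
    }

    public class FeedDataContext : DataContext
    {
        // "isostore:/" points the database file at isolated storage.
        public FeedDataContext() : base("isostore:/Feeds.sdf") { }

        public Table<FeedItem> Items;
    }

Creating the file is then a matter of calling CreateDatabase on the context if DatabaseExists returns false, and saving edits back is a single SubmitChanges call.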

And I would like to pass it over to Jaime here, who is going to go ahead and show us an example of that in action.

JAIME RODRIGUEZ: Thanks, Scott.

Good morning. Today, I’m going to show you how local database is going to help an existing Marketplace application. This application right here is Speed Reader; hopefully some of you have seen it. It’s a very popular RSS reader. And it’s got great functionality that allows me to manage my feeds. I can organize them. It has really great navigation, and it’s got Twitter and Facebook integration. And my favorite feature is actually off-line support.

So, every time I go on a flight, I usually click the sync button right here. It downloads all my feeds, and it allows me to read them while I’m 100 percent off-line. It actually marks them as read as I go. And then, when I get back online, it updates the server and says, “Hey, he read all these items,” so I don’t have to read them again.

Again, it’s a Marketplace application today, and if you think about it, implementing this off-line capability was a bit of work. We have the feeds downloaded into isolated storage files, we have all this metadata we’re keeping track of, and there’s a lot of stuff to keep track of in the current 7.0 release.

In “Mango,” what we’ve done is, we’ve actually migrated it to use SQL CE and the local database. What we found first was that our code decreased by 30 percent or so in our persistence layer. So, we have a lot less code, it’s much easier to read, a lot less stuff to keep track of. And another thing is, it’s allowing us to introduce new capabilities and new features that customers have been asking for. For example, search is the No. 1 feature that people have been asking for, and here you see the code; with isolated storage, search was not practical. We couldn’t parse all the files, go search them, and do everything we needed in order to implement search.

Using SQL CE, it is this one line of code right here to implement search. It’s just one LINQ to SQL statement, and as you can see here, we can even get fancy. We can go in there and say, here, search title, or search description. We can do order by; we can do group by. We have much, much more powerful query capabilities that the database is doing for us. We don’t have to do any of that work.

So, here, let me show you. This is the new search feature, and I’ll just go there and say, search for WP7 Dev. Again, I can get fancy with my searches, so I can search titles only, and there you go. Search results are coming in. Hardly any code — one line of code. That would have taken us hundreds of lines of code before.
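Roughly what that one-line search could look like, reusing the illustrative FeedDataContext and FeedItem types sketched earlier; the helper class and term handling are simplified:

    using System.Linq;

    public static class FeedSearch
    {
        // One LINQ to SQL query; the database does the filtering and ordering.
        public static IQueryable<FeedItem> SearchTitles(FeedDataContext db, string term)
        {
            return from item in db.Items
                   where item.Title.Contains(term)
                   orderby item.Title
                   select item;
        }
    }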

Thanks, Scott.

SCOTT GUTHRIE: Thanks.

Support for network sockets is also now built into “Mango.” This enables a bunch of interesting networking scenarios using standard network socket APIs, so let’s see another example of that in action.
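For context, a sketch of opening a TCP connection with the Silverlight-style asynchronous socket API that “Mango” adds; the class name and the way the result is handled are illustrative rather than taken from the IRC demo:

    using System.Net;
    using System.Net.Sockets;

    public class IrcConnection
    {
        public void Connect(string host, int port)
        {
            var socket = new Socket(AddressFamily.InterNetwork,
                                    SocketType.Stream, ProtocolType.Tcp);

            var args = new SocketAsyncEventArgs
            {
                RemoteEndPoint = new DnsEndPoint(host, port)
            };
            args.Completed += (s, e) =>
            {
                // e.SocketError says whether the connect succeeded; sends and
                // receives reuse the same SocketAsyncEventArgs pattern.
            };

            if (!socket.ConnectAsync(args))
            {
                // The operation completed synchronously; handle the result inline.
            }
        }
    }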

JAIME RODRIGUEZ: We had a planning meeting a few weeks ago, and we were going to say, “Oh, we have to show sockets.” And we said, “How are we going to do that?” And Bob made a joke, and he said, “Well, for the last 10 years, every time I’ve introduced a new programming language, I do Hello World, and every time I introduce sockets, I do a chat application.” Everybody laughed, and we kind of dismissed it.

But, while I was at it, I just did a quick Bing search for IRC Silverlight library. And the very first results were these open source IRC.net libraries. So, we’re here at the meeting; the meeting is 90 minutes long. And I download the source, tweak the project so it can run on Windows Phone, make a couple of line changes to the code, and there, before the meeting is over, within the next few minutes, I am round-tripping IRC back and forth to an IRC server on my Windows Phone.

I showed this to Scott, and he said, “That’s it. We have a demo.” So, this is what you see right here. This is the IRC.net library in action. The exact same code. I didn’t tweak it. I didn’t do anything in there. For those of you that want to see some socket code here, we create sockets the exact same way we do in Silverlight. We can see this is a P2P socket. It’s all the code that you’re already familiar with.

After that, of course, I still had to go in and do a little bit of work on the UI. So, let me show you the application we came up with. We’re calling it WPIRC or Metro IRC because it’s kind of our Metro UI on top of IRC. Here are some active rooms that I’ve already logged into — it automatically logs into all of these. I can see the top rooms where there is a lot of action going on right now. I can browse rooms and find rooms that I’ve never seen before, that I haven’t logged into before.

And of course it’s all about chat. So, let me click right here to go into chat. Now, just so you know I’m not cheating, let me go here. This is live: this is mIRC and the live Windows Phone client, and I’m going to send a message back and forth. So I can go here and say, live from Windows Phone, hi. So, this is live. It’s real code.

Now, one thing that you can see here is the IRC client is actually taking advantage of the rich textbox control now that we have Silverlight 4 running on the phone. I get rich textbox for free. So, that was a big pain before; now we have it. And, of course, I have to reply back, so we’ll say hi back with a little title and send it. And it works.

The lesson for me, if you’re an ISV, if you’re a socket developer: it’s live, and replies are coming in before I even sit down. (Applause.) My takeaway: if you’re a socket developer today, these are the APIs you’re already familiar with. They work; they’re 100-percent compatible with Silverlight 4. It’s the exact same APIs we’ve had in .NET for the last 10 years. So, just go bring your code and start building a Windows Phone application.

Thank you.

(Applause.)

SCOTT GUTHRIE: It’s always great to get audience participation in a chat application in a live keynote. And there’s more coming onto the screen, I can see. So, we’ll see how the keynote is going later, in real time. That was a great example of how you can build an IRC chat application. And I think it’s probably the coolest-looking IRC application I’ve ever seen.

I want to introduce now Mike Roberts from Kik Interactive. They’re a startup that builds a super-popular messaging application; they gained more than 4 million users in their first year. And here’s Mike to talk about how they’re taking advantage of Windows Phone to do it.

MIKE ROBERTS: Thanks, Scott. Welcome to MIX, everyone. I’m Mike Roberts, head of mobile development at Kik Interactive. I’m here to give you a sneak preview of Kik Messenger for Windows Phones and talk to you about the amazing platform that it’s built on top of. Kik Messenger is a real-time, cross-platform messenger that went from zero to a million users in just under two weeks. We connect you to the people that are most important to you in real time, on any platform.

So, let’s go ahead and take a look at Kik. The first thing you’ll see is your list of recent conversations. My coworker Vic is down at the conference. So, let’s go ahead and open the conversation with him and see what he’s been up to. You can see exactly when our message gets delivered to Vic’s device, when Vic reads the message, then when he starts typing his response. That makes your mobile conversations that much closer to being a face-to-face experience.

Now, apparently Vic is backstage, and the great thing about picture messaging in Kik is that you don’t have to describe what you’re seeing. You can simply take a quick photo in your conversation, and put the person you’re talking to right there beside you. I think Vic is having a hard time finding someone to take a picture of backstage. He wants to be up here.

No? Prerelease code, so bear with me. So, the conference goes on all day. This is Vegas; we’re going to have to find something really good to do tonight. I know Ted is down for the conference, too, so we’ll add him to the conversation.

Now, Kik conversations, just like your real-world conversations, grow and shrink, so that you’re talking to the right people at the right time. Inviting Ted to the conversation is simple. Just tap the Combo Info button at the bottom of the screen, click the plus, and select Ted from our list of contacts. Just like that, Ted is now part of the conversation with us.

Now, being a real-time messenger means being a part of your real life. And we know you’re not going to be spending all of your time using our app. And for those times, Microsoft provides a powerful push notification service that’s dead simple to use. If you can connect to a Web server and send three lines of XML, you can add deep push integration into your app.

Let me show you what that means for Kik. If I have the app in my pocket, and I’m walking down the strip when a new message comes in, Kik will let me know. I can simply tap the push notification, and I’m taken straight into the conversation with that person, and then I can get back to my real life so much faster. And all that power from just three lines of XML. Microsoft’s amazing developer tools let us build this application in a quarter of the time it took on any other platform, with half of the development resources.
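As a rough sketch of what those “three lines of XML” amount to on the server side, here is a toast push POSTed to the channel URI the phone registers with the push notification service; the channel URI and message text are placeholders:

    using System.IO;
    using System.Net;
    using System.Text;

    public static class ToastSender
    {
        public static void Send(string channelUri, string line1, string line2)
        {
            // The toast payload itself is just a few lines of XML.
            string toast =
                "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                "<wp:Notification xmlns:wp=\"WPNotification\">" +
                  "<wp:Toast>" +
                    "<wp:Text1>" + line1 + "</wp:Text1>" +
                    "<wp:Text2>" + line2 + "</wp:Text2>" +
                  "</wp:Toast>" +
                "</wp:Notification>";

            var request = (HttpWebRequest)WebRequest.Create(channelUri);
            request.Method = "POST";
            request.ContentType = "text/xml";
            request.Headers["X-WindowsPhone-Target"] = "toast";
            request.Headers["X-NotificationClass"] = "2";   // deliver immediately

            byte[] payload = Encoding.UTF8.GetBytes(toast);
            using (Stream body = request.GetRequestStream())
            {
                body.Write(payload, 0, payload.Length);
            }
            request.GetResponse().Close();
        }
    }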

We’d love for you to join the 4 million Kik users already using the application and discover the difference for yourself. Kik Messenger for Windows Phone will be available in the coming weeks. Thank you. Enjoy the conference.

(Applause.)

SCOTT GUTHRIE: Let’s talk now about camera capabilities. With Windows Phone 7, applications had to launch a separate camera app in order to take pictures. With “Mango,” applications can now directly control and stream content from the camera, and Ben is here to show us a cool example of it in action.
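A sketch of what that direct camera access can look like, assuming the “Mango” camera classes (PhotoCamera, CameraButtons) roughly as they appear in the developer tools; the class name and the brush parameter are illustrative:

    using System.Windows.Media;
    using Microsoft.Devices;

    public class CheckCapture
    {
        private PhotoCamera camera;

        public void StartViewfinder(VideoBrush viewfinderBrush)
        {
            camera = new PhotoCamera();

            // Paint the live camera preview onto whatever the brush fills.
            viewfinderBrush.SetSource(camera);

            // The hardware shutter button triggers a capture. In a real app,
            // wait for the camera's Initialized event before capturing.
            CameraButtons.ShutterKeyPressed += (s, e) => camera.CaptureImage();

            camera.CaptureImageAvailable += (s, e) =>
            {
                // e.ImageStream holds the captured image; upload or save it here.
            };
        }
    }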

BEN RIGA: USAA serves 8 million members with its banking and insurance offerings. Those members are primarily men and women of the U.S. military and their families. So, by definition, by the nature of their job, they’re actually much more mobile than the average person. As a result, USAA has had to be very innovative in their use of technology in serving those members. They were the first to provide remote check deposit and have actually processed over $2.5 billion worth of checks via smartphone.

Let’s have a look at how they’re taking advantage of a “Mango” feature to deliver that capability to the Windows Phone platform.

So, here you see, USAA actually already has an application in the Marketplace, but you see that they’ve added the Deposit@Mobile function, which is the remote check deposit. I’m actually using Scott’s device here. It’s a little-known fact that Scott actually asked us to make contributions in return for onstage demo privileges. So, I’m going to make my contribution right here onstage.

So, let me add the check amount and accept that. Obviously the most important part of the check processing is capturing the check image. So, I’m going to grab that; you’ll see that USAA is using the Raw Camera API. You can see the green guides there that tell me exactly where to position my camera. And that is very important because it allows me to grab an accurate and complete image of that check, so it can be successfully processed.

The next step here is to grab an image of the back of the check. So, let me do that. That’s important because we need to make sure that Scott has actually endorsed the check. You’ll notice here that I’m actually using the hardware button. There is an onscreen camera button, but I’m using the hardware button because that allows me to focus and, again, grab an accurate image.

So, now that I have that, I can actually accept and verify that I grabbed the right images. I’ll accept that. And now that’s being uploaded to the USAA datacenter. It will then be forwarded on to the check processing facility. And it will make it into Scott’s account by tomorrow morning.

USAA is taking full advantage of the raw camera API to very quickly build an application that allows their members to capture check images and also ensures that those check images are accurate, and complete, and can be processed quickly. Checks in the bank, Scott. (Applause.)

SCOTT GUTHRIE: I love that demo. I wish we could do it all day. So, with Windows Phone 7, you could build applications and games using either Silverlight or XNA, but you couldn’t have an application that used both. With “Mango,” you can now build applications with a UI that contains both Silverlight and XNA at the same time. (Applause.) Which enables you to build even richer experiences. So, here’s a fun example of one of them.

So, I got an application here on a real Windows Phone, projected out using USB. And it’s a MIX-4 application that’s actually in the Marketplace, I believe, today and allows you to kind of see what’s going on at the conference.

This particular application has kind of a nice Easter egg added to it. And so if you click on the about, there we go, almost, there we go. There goes the about. You can see the particulars of the conference, and at the very bottom here it says “about me.” And if you click that link, this is a Silverlight app, but it’s going to basically load a 3-D surface.

The cool thing about this is that particular 3-D model is a full 3-D skeletal model, and it’s interactive. So, I can actually touch him and move him around. So, it is fully interactive 3-D. Down here, we have Silverlight controls. So, for example, if I wanted to I can zoom out, or I can zoom in. Anyway. (Applause.)

Sorry for those of you in the front row.

Anyway, we’ll focus on a different part. So, full 3-D interactive, the cool thing is we have 2-D and 3-D on the same surface. I can use buttons here at the bottom. That was a slider built using XAML. If I wanted to, for example, this is kind of my daytime gig. When I go out on the town tonight, I’ll be looking a little different. If you see this character, that might be me.

And if you zoom out a little bit here. And when I’m down by the pool, looking a little bit more svelte there. This is actually completely anatomically correct. (Laughter.) Seriously, don’t make me prove it. Anyway, it’s all part of Windows Phone.
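For those wondering how that Silverlight-plus-XNA combination gets wired up, here is a heavily simplified sketch, assuming the sharing types in the “Mango” tools (SharedGraphicsDeviceManager, GameTimer); the page name and the empty draw body are illustrative:

    using System;
    using Microsoft.Phone.Controls;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public partial class ModelPage : PhoneApplicationPage
    {
        private GameTimer timer;

        protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
        {
            // Share the graphics device so XNA can draw underneath the Silverlight UI.
            SharedGraphicsDeviceManager.Current.GraphicsDevice.SetSharingMode(true);

            timer = new GameTimer { UpdateInterval = TimeSpan.FromTicks(333333) }; // roughly 30 fps
            timer.Draw += (s, args) =>
            {
                var device = SharedGraphicsDeviceManager.Current.GraphicsDevice;
                device.Clear(Color.CornflowerBlue);
                // Draw the 3-D model here; Silverlight controls still render on top.
            };
            timer.Start();

            base.OnNavigatedTo(e);
        }
    }

The page keeps its XAML controls, while the timer’s Draw event renders the 3-D content each frame beneath them.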

So, we’re really excited about the new release of Windows Phone. It delivers some great new user experiences, as you saw Joe show.

And for developers, it exposes thousands of new APIs, a bunch of new capabilities, some really great free development tools that allow you to build awesome experiences for it. So, we’re looking forward to talking more in the show this week about its capabilities, so lots of great breakout sessions. And we’re really excited to see what you build with it.

So, let’s switch gears now and talk about Silverlight for the browser. (Cheers and applause.)

Last December, we announced Silverlight 5 and provided a first look at its feature set, at our Silverlight Firestarter event. And the feedback was incredibly positive, and we’re really excited this week to be able to talk about the release in more depth.

Silverlight 5 contains hundreds of new APIs that open up a bunch of new scenarios and opportunities. And this is just a brief look at a couple of them. And I’m going to spend a little bit more time in the rest of the keynote showing off a couple of them in action and how you can take advantage of them.

Silverlight 5 makes premium media experiences even better. We’re introducing, as part of this release, hardware-based video decode. This means that you can now exploit the GPU, not just for rendering, but also for decoding your video on your machine. The benefit is that even on low-end netbooks you can now enable 1080p HD video playback in a smooth way.

We’re also, as part of this release, integrating with IE9’s new graphics extensibility, so Silverlight can participate in the IE9 hardware-accelerated graphics experience. This enables you to get even smoother experiences on pages that contain both HTML5 and Silverlight together.

Silverlight 5 introduces trick play support for media. This enables variable speed playback of media content on the client, with what we call automatic pitch correction. It basically means you can speed up videos without people talking sounding like chipmunks. This is great for learning experiences. So, if you’re watching training you can speed up the training to, say, 1.5 times the speed, and you can still understand what the trainer is saying.
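A minimal sketch of trick play as a playback-rate setting on the media element, assuming the Silverlight 5 MediaElement.PlaybackRate property; the helper class is illustrative:

    using System.Windows.Controls;

    public static class TrickPlay
    {
        public static void SpeedUp(MediaElement player)
        {
            // 1.5x playback with automatic pitch correction; 1.0 is normal speed.
            player.PlaybackRate = 1.5;
        }
    }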

Silverlight also now can receive commands from a remote control. This is great for enabling a 9-foot living room experience. And we think a combination of these features, plus all the great media features we already have in Silverlight, really enables the richest media experiences on the Web.

And to showcase some of what’s possible with Silverlight, I’d like to invite Lieutenant Katie Kelly from the United States Navy Blue Angels onstage, to talk about the new website they’re building with it.

LT. KATIE KELLY: Hi, thanks everyone. It’s great to be here. I’m Lieutenant Katie Kelly of the Blue Angels. Since 1946, the mission of the Blue Angels has been to enhance Navy recruiting and credibly represent the Navy and Marine Corps aviation to the American public. The team is made up of 130 individuals who support this mission. We have 16 officers on the team, six of whom fly the F/A-18 Hornet, three Marine Corps pilots who fly the C-130, which we affectionately call Fat Albert, along with seven support officers, such as myself.

We also have 110 enlisted maintenance personnel on our team as well. They come from 15 different job skills, and they come together to form a self-sustaining unit based on one of the founding principles of the Blue Angels, which is teamwork. It’s a real honor to work side by side with these sailors and Marines who are hand-selected from the fleet to come and represent over 540,000 active duty sailors and Marines who are serving throughout the world and forward-deployed right now.

From mid-March to mid-November, the team travels to over 35 cities across America and puts on 70 flight demonstrations. Our performance begins with Fat Albert, who takes off and displays the tactical flight characteristics of the aircraft. The diamond goes up after that. That’s jets one through four, and they represent the precision flying that the Blues are known for; they will fly as close as 18 inches apart.

And then we have the solos who will go up, and they demonstrate the max performance of the F/A-18. We reach over 10 million people a year who come to our air shows, but we really want to reach other people as well. And that’s where our official website comes in.

And with the help of Microsoft, we are really excited to launch a new website — a state-of-the-art website — this year, to effectively accomplish the mission of: one, inspiring young people; and two, educating them on what their Navy and Marine Corps does. 2011 promises to be a really exciting year, with the launch of the new website, as well as the celebration of the centennial of Naval Aviation.

With the new website, we really look forward to reaching even more people, helping inspire and educate them about the Navy and Marine Corps, and paying homage to the legacy and tradition of those who came before us in the past 100 years of naval aviation.

Thank you. And with that, we’re going to have Mike come out, and he’s going to give you all a demonstration of the new website.

SCOTT GUTHRIE: Awesome.

MIKE DOWNEY: Hi, everyone. So, Microsoft partnered with the U.S. Navy Blue Angels and an award-winning interactive agency, EffectiveUI, to create a whole new experience for the Blue Angels. So, we’re leveraging some of the latest and greatest technologies, including Silverlight, HTML5, ASP.NET and IIS Smooth Streaming, to create this whole new experience. Let’s go ahead and take a look at the site.

So, our first challenge was to create a home page experience that really engaged the user, pulled them in, and really set the tone for the rest of the site. So, we’ve incorporated some HTML5 features, like the video tag, the audio tag and canvas, to create an animated build-in, a user interface that overlays a high-quality video that’s being used as a background.

So, let’s take a little bit deeper dive into the site. The first section that we took a look at was the team section. Our goal for this was to really connect the visitor, the audience, with the individuals who make up the Blue Angels. So, in this section you can learn more about the officers. It’s a very interactive interface. You can read their bios, you can learn about the enlisted teams, and you can read all about what it’s like to be a Blue Angel.

Of course, the team members are the most important part of the organization. But the thing that gets people really excited is the shiny blue jet. I’m going to jump over to the aircraft section, and we’re going to take a look at the F/A-18 Hornet, the blue jet. Here, we’ve taken a really high-resolution 3-D model, and we’ve created an interactive experience that lets visitors and potential recruits learn more about the aircraft.

You can actually take it and rotate it around. You can also click on interactive elements that show you, in this case, the different modifications that the team makes to the jet that they inherit from the fleet. So, you can click on those, and it will tell us more about that stuff.

Now, we’ve also incorporated some animated sequences that show off the different configurations of the aircraft. So, here, we can see the landing gear being retracted into the fuselage of the plane. So, that’s the aircraft section. Now, the biggest challenge that we had as we were working on this project together was to try and recreate the thrill and excitement of a Blue Angels flight demonstration.

So, to do that, the team collected three different camera angles for each of the maneuvers that the team flies and created a single composite video that includes a 3-D visualization of the maneuver, so the visitor can be oriented as to where the aircraft are throughout that maneuver.

Now, using the VideoBrush in Silverlight, we were able to make this video experience interactive, so the user can actually click through the different camera angles and really be in control of the experience. All of these videos are delivered using IIS Smooth Streaming, so that the Navy can ensure that the user is getting the best possible quality video delivery based on their connection speed.
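A small sketch of the VideoBrush approach being described: paint a playing MediaElement onto a shape, then handle clicks on that shape to switch angles. The rectangle and the angle-switching logic are illustrative:

    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Shapes;

    public static class AngleView
    {
        public static void Attach(Rectangle surface, MediaElement video)
        {
            var brush = new VideoBrush();
            brush.SetSource(video);          // the rectangle now shows the video
            surface.Fill = brush;

            surface.MouseLeftButtonUp += (s, e) =>
            {
                // Switch video.Source to a different camera angle here.
            };
        }
    }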

Now, one of the other features we incorporated is based on the requirement for accessibility. So, we have closed-captioning support built into the video player as well. And in this example, you can actually read a transcript of the radio communications of the pilots as they fly the demonstration.

Now, Silverlight also enables us to incorporate multiple audio tracks with each video. So, I’m going to turn on the narration track, and we’re going to go down and take a look at one of the most difficult maneuvers that Blue Angels fly, the Double Farvel.

So, this maneuver is being flown about 50 feet off the ground, and it’s pretty crazy. I mean, this is cool stuff. So, that’s part of the inside the demo experience. The Navy is in the process of deploying the site today, later on today, so please go take a look at BlueAngels.Navy.mil.

And one last thing, if you’d like to check out one of the shows that Blue Angels put on around the country, we’ve incorporated an interactive map under the show information tab using the Bing map API. So, you can go in and click on any one of these, and it will pull up the map, and show you all the locations so you can find one near your hometown.

All right. Thanks, everyone.

(Applause.)

SCOTT GUTHRIE: Thanks, Mike.

So, as we talked about in December, the Silverlight 5 release brings with it a ton of new features and capabilities, things like better text quality and richer text editing, vector printing support, a new 3-D API, XAML debugging, and much, much more.

And what I would like to do to show off an application that kind of highlights a bunch of these new features and capabilities is invite Jon Papa onstage to walk you through a great application that’s been built with it.

JON PAPA: Hey, everybody. My name is Jon Papa, and as Scott mentioned, I’m going to show you some of the new features in Silverlight 5 today. And to start off here, we’re going to talk about building a house. I’m going to show four features in Silverlight 5 today; there’s lots of great new stuff, but what we’re going to show is building a house. And the first feature, obviously, is going to be Silverlight 3-D.

So, in this particular case, I’m going to build a house, and I want to choose some different homes that I’ve got here. I’ve got different camera angles, they’re rotating around. I have different positions. And I kind of like brick homes, so this looks pretty good, I’ll choose this one.

As I flip over now, I can see I’ve got full interaction with the 3-D. I can zoom around the house. I can check out the roof, the floor; that’s the floor, there’s the roof. Zoom in, look inside the house. Look back down, and you can see I’m getting really great performance; things are rocking along pretty well. And the great thing about Silverlight 3-D is you get the familiar and powerful XNA APIs that many of you already know, so you can really take advantage of that and build some great stuff. Another great thing here is that we get GPU acceleration, so we get some great performance. You can rock along at 60 frames per second by offloading the 3-D rendering to the GPU with Silverlight, and you still have plenty of computing power to spare.

So, this is a cool house. But I’d like something a little snazzier. So, let’s go over and look at something with a few more angles and curves. That looks pretty cool. So, on this model, I’m going to decorate my house. Notice there are no doors; there are no windows. I’m going to show off another feature in Silverlight 3-D we call Projection. It allows you to project from 2-D space to 3-D space, and back again.

You’ll notice this here: as I grab a door, and the control is down here, you’ll notice the door is now on the floor, it moves up to the wall, and it actually adheres to the 3-D space. What’s happening with this Projection is, in Silverlight, I’m mapping the X/Y 2-D coordinates over to the 3-D space while respecting the model. It basically means the door will follow along the house. It’s pretty easy, pretty simple to do. You can also see this is a tooltip. So, as I zoom in on the house here, you see as I move up and down, the tooltip moves around with the house. That’s also using Projection with 3-D. Some very, very cool features that we’ve got.

Let’s finish building my house up. I need to have at least one window in my house where we can see what’s going on. I’ll put some shutters up here. I’ll take some other elements. All of this is the 2-D and the 3-D worlds interacting. And let’s put a light outside of the house, so it can be a little security-conscious.

So, this brings me to the second feature in Silverlight 5 that I want to talk about — it’s called binding in style setters, which gives you the ability to change different property values in one place, bind it to an object and have them update throughout the entire site through styles.

So, how does this work? Simply put, I can change and choose the color for my doors and my shutters down here with the swatch, and with a single click it binds to all the styles in the entire experience and updates on the screen. And we can choose some really interesting colors if we’d like to, like yellow. And that’s great because it gives us all that binding experience.

So, now that we’ve got that going, I’m moving to the third feature I want to talk about with Silverlight 5. And this is implicit data templates. This is really cool for making your development process much shorter without writing as much code. So, over here on my construction list, you can see the prices that I have in the house. It’s a little pricey. So, let’s fix that first. Let’s make a couple of nice big discounts. That’s not big enough. Yes, I like that better. So, I’m a good real estate agent, too.

So, we’ve got this great data template over here, and we see all of my items. What implicit data templates are going to do for me is take all the data types that are bound in this list and change how each one appears throughout the application. So, what does that mean? It means that when I have doors, I can show different colors, different icons, completely different controls and elements. The door can show me energy-efficient, or sealed; and down here the lamps, they’re not doors. They’re all in the same list, but the right data template is used in each place to show the right data values in this particular case. All that really can shorten your development time.

So, this brings me to my fourth feature, and one of my favorites in Silverlight 5. This one is data-binding debugging, which comes in really handy, since I’ve got a bug in my experience up here. You’ll notice the door has no price. I’m pretty sure it’s not free. So, it probably has a binding problem.

Let me flip over to Visual Studio, and we can take advantage of the new feature for data-binding debugging. You can see here, we’ve got “total” as my binding expression, and I’m looking for that value on my object. So, it looks OK to me. So, let me set a breakpoint over here, and once I set my breakpoint I don’t even have to stop. I can go back. We’ll rerender the page, and there we see we’ve got a breakpoint right inside Visual Studio. Very, very cool. (Applause.)

But, better yet, we can look down here and see what kind of errors we’ve got. Let’s zoom in, here we go. Down here, we see that the total property is not found on “door/house.” So, I was wrong. There’s something going on here. Let’s figure out what’s going on. We’ve got some great information down in the locals window. We can see the entire data binding pipeline and look at all the information, even more detailed information about the errors.

But what I want to look at is what’s going on with something called the “final source.” The final source shows me what I am bound to. It’s a door. OK. That makes sense; it was a door that wasn’t showing the price. But I don’t see any property in there called “total.” I do see one called “price,” though. So, it shows me in here it’s got a value of 200 — that’s what I should have typed earlier. It makes my life a lot easier. In like five seconds, I can figure out what’s going on without even stopping my application. So now, I can remove my breakpoint. I can even set a conditional breakpoint if I want to as well. And just continue running along.

Now, those are great core features, but once I build my house, I now want to do a little bit of a tour of it as well. Let’s go ahead and take a look at the inside of my home, and check it out. I wish my home looked like this. So, I’m zooming around with some great performance for 3-D again, changing the camera angles. Notice the light values, the rotation, and let’s check out the living room. And you’ll notice I’ve got a really cool picture of Scott up here. Doesn’t everybody wish they had a portrait of Scott in their house? Really walk around, I’m getting awesome performance running on the GPU. I can even go up the stairs, change the camera angles and fly around the home.

So, in summary, what I want you all to take away is some great new features in Silverlight 5. We talked about the 3-D, which has awesome performance running on the GPU. It gives us a lot of power, and the CPU is mostly left free. You’ve got 2-D to 3-D projection with that. You’ve got the XNA APIs you can start coding with. We’ve also got binding in style setters, and we’ve got implicit data templates, which really make development life much easier. And one of my favorites, again, is data-binding debugging, which lets you really shorten development lifecycles.

And I’m really happy to say that this full experience, the House Builder Experience, we’re going to release that later this month, the full source code and the full application up on our site, so you guys can see a great example of how to use Silverlight 5 to build great experiences.

Thank you all very much.

(Applause.)

SCOTT GUTHRIE: We’re excited to announce that the Silverlight 5 beta, along with the Visual Studio and Expression Studio support to design and develop applications with it, is now available for download.

(Cheers and applause.)

We’ll ship the RTM release of Silverlight 5 later this year, along with the RTM release of Windows Phone “Mango.” We’re really looking forward to seeing some of the amazing experiences that you’re going to build with them, and we’re really looking forward this week to lots of great talk and discussion in the hallways about the technology.

So, we’ve seen a lot of great examples of users’ experiences today. What I’d like to do now is actually hand it over to Jeff Sandquist, who is going to show you how you can take your user experiences even further, using Kinect. Here’s Jeff.

JEFF SANDQUIST: Kinect is exciting. It is the fastest-selling electronic device ever. Let me repeat that, the fastest-selling electronic device ever. (Cheers and applause.) Over 10 million devices sold. And you know, while those of us were dancing up a storm, playing a little volleyball or rafting down a river, all from the comfort of our living rooms, there were a whole host of others, innovators, hackers, enthusiasts, that were doing things with Kinect on Windows that none of us could ever have imagined. And that is why we will be releasing the Kinect for Windows SDK.

(Cheers and applause.)

Thank you.

The Kinect for Windows SDK will give you full access to the microphone array, skeletal tracking, and you’ll be able to write your apps in VB, C# and C++. It will be available later this spring, and at first with a noncommercial license. But, we will be having commercial terms following.

I think the best way, though, to really show what this is all about is to write some code, show you some demos and have a bunch of fun. I’m going to have Dan Fernandez, who is on my team, come on up, and we’re going to build our first Kinect app together.

Come on up, Dan. Let’s do this.

DAN FERNANDEZ: So, with Kinect we’re going to build our first Hello World application. For those of you who have never programmed a Kinect, it has a video camera, as well as the ability to get depth information, meaning I can see how close or far something is. So, let’s just show how you can build an application.

So, I have a prebuilt main window here. And I’m going to drag and drop an image. And I’m going to put this in the left corner. So, what we’ll do is just size this first image to 640, click enter, and let’s change the height to 480, so a 640 by 480 image. Let’s drag one more image on here, and we’ll just keep the size for that, and we’ll add a button and I’ll move that there. That looks gorgeous. And I’m going to change the content here, so we can just make it real clear that that is a start button.

OK. So, let’s jump into the code-behind for our application. So, one thing we should mention: we have a reference to the Kinect .NET library, which is a wrapper that allows us to communicate back and forth with the driver on this PC. So, the first thing we’re going to do — and by the way, some of these APIs, properties and methods may change, but at least it will let you understand conceptually how you can program Kinect.

So, we’ll build it. Let me full-screen this so you can see it all: Kinect sensor = new Kinect(). OK. So, what I’m going to do is subscribe to some of the events that the sensor has. I’ll start with DepthFrameReady, hit tab, and just have Visual Studio build out that event for me. And I’ll subscribe also to VideoFrameReady.

So, what will happen is those events will fire when I’m getting a new frame, new information from the sensor. For the video frame, we’re going to set that 640 by 480 image. So, I’m just going to say Image1.Source = e.Image. Let’s do the same thing with the depth frame; this time Image2.Source = e.Image. So, the last thing we have to do is actually tell the sensor to start. So, when we click on our button, we’ll just call start on the sensor. And I’m going to put it to full screen.
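Pulling that narration together, here is a sketch of the Hello World as described, using the prerelease wrapper names from the demo (Kinect, DepthFrameReady, VideoFrameReady), which may change; Image1, Image2 and the start button are the controls dragged onto the window:

    using System.Windows;

    public partial class MainWindow : Window
    {
        // The Kinect type comes from the prerelease .NET wrapper used in the demo.
        private readonly Kinect sensor = new Kinect();

        public MainWindow()
        {
            InitializeComponent();

            // Each event fires when a new frame arrives from the sensor.
            sensor.VideoFrameReady += (s, e) => Image1.Source = e.Image;  // color camera
            sensor.DepthFrameReady += (s, e) => Image2.Source = e.Image;  // depth data
        }

        private void StartButton_Click(object sender, RoutedEventArgs e)
        {
            sensor.Start();
        }
    }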

JEFF SANDQUIST: I think you need like a demoer or something up here.

DAN FERNANDEZ: Do you want to be my Vanna White?

JEFF SANDQUIST: Will you be my Pat Sajak?

DAN FERNANDEZ: That would be swell.

JEFF SANDQUIST: Thank you.

DAN FERNANDEZ: OK. So, I’m going to run, and with any luck we’re going to press start. There we go.

JEFF SANDQUIST: Yes, it’s gorgeous. What I’m going to do here, we can do a little bit of this, a little bit of that.

DAN FERNANDEZ: That’s a little bit better than Scott’s thing up there, but we’ll leave that. Yes, it’s going to take a long time to burn that out of my memory. So, video frame on the left, depth frame on the right, and as you see, the colors change depending on how near or far Jeff is.

JEFF SANDQUIST: So, basically, we just did that with a couple of lines of code. What I’m going to do now is get a little crazier, and we’re going to build a very basic painting application.

DAN FERNANDEZ: So, what we’re going to do is in our Depth Frame Ready, and I have a prebuilt class here. Let’s just jump to full screen to make this really easy, and I have a Point p = and this is a prebuilt class, and I’ll talk about what this does in just a second. But, what we get when this event fires is an array that gives you the depth values. Which means at a certain pixel this pixel is at this specific distance. So, I’m just going to pass this in, and I’m going to say, distance.near. So, what this does is returns the average point for all the near points, and you’ll see what we’ll be able to do with that in a second. Now, I’m going to cheat here and just have a code snippet for my draw line.

JEFF SANDQUIST: Don’t you wish in Visual Studio, everything you could just do as a code snippet, and it just spits out this massive thing?

DAN FERNANDEZ: Great. So, draw line, I have a point, and I have a last point. So, if the value is negative one, I’m going to ignore it. If it’s zero, that means it’s the first one. And I’m going to set last point to that point. Otherwise what I’m going to do is, every time that event fires, I’m going to draw a line from the last point to the new point.

So, we’re going to be drawing a line here, and here are just some of the properties of my line. It’s 20, it’s orange, and the style is round.

The last thing we do is just add it to our Window. So, let’s just jump up here and once we have the average point that’s near, we’ll just call it draw line, and pass in our new point. So, Jeff, I’m going to ask you to Vanna White this again.
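A sketch of the DrawLine helper being described, with a flag in place of the demo’s sentinel values; the canvas is an assumed drawing surface added to the window:

    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Shapes;

    public class PaintSurface
    {
        private readonly Canvas canvas;       // the drawing surface added to the window
        private Point lastPoint;
        private bool hasLastPoint;

        public PaintSurface(Canvas canvas)
        {
            this.canvas = canvas;
        }

        // Called with the average nearest point each time a depth frame arrives.
        public void DrawLine(Point p)
        {
            if (p.X < 0) return;              // no near point found; ignore it

            if (!hasLastPoint)                // first valid point: just remember it
            {
                lastPoint = p;
                hasLastPoint = true;
                return;
            }

            canvas.Children.Add(new Line
            {
                X1 = lastPoint.X, Y1 = lastPoint.Y,
                X2 = p.X, Y2 = p.Y,
                Stroke = new SolidColorBrush(Colors.Orange),
                StrokeThickness = 20,
                StrokeStartLineCap = PenLineCap.Round,
                StrokeEndLineCap = PenLineCap.Round
            });

            lastPoint = p;                    // the next segment starts here
        }
    }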

JEFF SANDQUIST: Here we go.

DAN FERNANDEZ: And we’ll press start and kind of move a little.

JEFF SANDQUIST: There we go, let’s do a little bit of this, add a little bit of that, and a little bit of that.

DAN FERNANDEZ: That is gorgeous. Here, I’ll give you a new clean surface to play with.

JEFF SANDQUIST: There we go. I could do a nine or something like that. It’s one of my favorite numbers.

DAN FERNANDEZ: That’s awesome. So, what this is doing is drawing a line based only on the closest points; as his hand hits that, it actually draws in real time.

JEFF SANDQUIST: That is really cool. What do you guys think? (Applause.)

So, it’s not bad for three or four minutes of a little bit of coding in Visual Studio to build your first Kinect app on Windows. I wonder what would happen if maybe we just had a few more minutes to go build something — what could we make? There we go. OK, you had more than a few minutes.

(Cheers and applause.)

DAN FERNANDEZ: So, what would you say this is?

CLINT RUTKAS: I would say this is a Kinect drivable lounge chair.

DAN FERNANDEZ: There are so many times I’ve been in a reclining chair and needed to go to the kitchen, and I’m like, there’s no way I’m going to make it. So, really this is fabulous.

So, now you have special wheels on here.

CLINT RUTKAS: I do. So, we have omnidirectional wheels, so that gives you cool stuff like this, I can parallel park.

DAN FERNANDEZ: That is going to be awesome to avoid the dog, that’s great.

So, let’s go ahead and power it down, and power it off right now, so we don’t drive you off the stage yet.

So, show me how to drive. What’s going on with driving here?

CLINT RUTKAS: OK. What we have onscreen here is basically two thumbsticks, and when I go further up, you can see the green power meters giving more power to all the wheels. If I go down, you can see it reducing power to all the wheels.

So, it’s very much like tank driving. What we do here is we send a value between one and negative one to our motors underneath, via the Robotech speed controller.

DAN FERNANDEZ: OK, cool. And the red and green we have in that corner, the bottom corner there, what’s going on there?

CLINT RUTKAS: That’s just giving us the point on the screen where the sensor, I’m sorry, where your hands are.

DAN FERNANDEZ: Sure, OK. Now, you added a very cool feature, your favorite feature here. You’ve got to show it off.

CLINT RUTKAS: My favorite feature is, because it is a lounge chair, automatic reclining via a Kinect gesture. (Applause.)

DAN FERNANDEZ: All we need to do is, I think, add a cup holder and we’ll sell dozens of these.

So, Clint, for people who want to build this themselves, what’s  

CLINT RUTKAS: All they have to do is wait: once the Kinect SDK gets released, we’ll be releasing the source code, instructions, and where to buy all the parts on Channel 9’s Coding For Fun section.

DAN FERNANDEZ: So, on Coding For Fun you’ll be able to download the app and if you have a lot of spare money you can build this yourself.

So, I love it, how about a big hand for Clint?

(Cheers and applause.)

JEFF SANDQUIST: We have a little bit of fun with our team.

Microsoft WorldWide Telescope is this amazing application for Windows. It allows you to explore the whole solar system, the moons, the stars, and it’s just an amazing experience.

Now, I’m going to have Jonathan Fay come up. He works in Microsoft Research. And what he’s done with WorldWide Telescope is, he’s enabled it with Kinect. And now, with this, you can have your whole universe at your fingertips.

Hi, Jonathan.

(Applause.)

JONATHAN FAY: Thank you, sir.

At Microsoft Research, we’ve been working with agencies like NASA and the European Space Agency, as well as universities worldwide, to put together astronomy information and planetary data from all across the universe. And when we saw Kinect, we knew that we had to bring the universe to your fingertips with this great tool. Right now, we’re looking at the Earth and low Earth-orbit satellites. The green ones are geosynchronous satellites, and we can come out here, and zoom out, away from Earth, and start looking at the whole solar system here, and basically move around and see all the planets.

Now, let’s go ahead and select another planet that we’ll go to. We’ll stroll over here to Saturn, and we’ll go ahead and fly into Saturn. And as we’re coming into Saturn, you’ll see something that looks like a ball of yarn that a cat has been playing with too much. These are asteroids that have been captured as new moons of Saturn. But, as we zoom in to Saturn even more, we can see the ring system here, and then you can see all the other moons that formed with Saturn’s rings.

But the universe is much larger than our solar system. As our solar system shrinks to just a single pixel, the rest of the stars around us are actually going to start moving from the constellation positions that you’re familiar with, and they start to move around in 3-D. And now we can see that the band of stars that was the Milky Way in our night sky is actually our home galaxy. So, we’re now looking at our galaxy from the outside. But our galaxy is just one of millions.

And scientists have been studying with the Sloan Digital Sky Survey, trying to map the entire universe, so that we can really see the large-scale structure of how our universe formed from the Big Bang. So, I can come in here and explore, and look at the individual galaxies, and the detailed structure in this.

There’s the Coma Cluster. The WorldWide Telescope enables you to not just go through space, but also travel through time. So we’re going to go to an event that’s going to happen on August 21st of 2017. As we fly back through the Milky Way to our stars, come back to our home solar system, and look at Earth here, you’re going to notice that time is now moving forward into the future. And as the satellites are orbiting, something interesting is happening: the shadow cast by the moon is moving across the United States, and it’s going to be the greatest eclipse of our lifetime. Within a day’s drive of almost everybody in the United States, you’ll be able to see this wonderful eclipse. I can’t wait to see it myself.

But, you know, you can preview this at WorldWideTelescope.org, and you can also come to our booth.

Thank you very much.

(Cheers and applause.)

JEFF SANDQUIST: Thank you, sir. Thank you very much.

How awesome was that? (Cheers.) The universe at your fingertips, it’s just unbelievable. Make sure you go to the Kinect Lounge. You’ll be able to play with the WorldWide Telescope. We’re going to have some of our other projects. But we’re not done yet. You know, I think it’s really important that we show the things that we at Microsoft are building with Kinect on Windows. But, we wanted to bring out some of the people from the community that have been doing some amazing things with Kinect.

Laura Foy of my team is going to come out, and along with her is Michael Zollner, and Stephan Huber. They’re with the University of Konstanz, and they’re going to show us one of the first of the community projects that has been using Kinect on Windows.

Come on out guys.

(Applause.)

LAURA FOY: Thanks, Jeff. Thanks everybody. This next demo is going to amaze and inspire you. Now, these two boys, armed with only a Kinect, some creativity, and a whole lot of brain power created Navi, Navigation for People with Visual Impairment.

And, Michael, why don’t you explain to the audience what that is.

MICHAEL ZOLLNER: Well, Navi is a project that enables visually impaired people to safely travel in indoor environments.

LAURA FOY: Awesome. And now, let’s take a look at what this man is wearing. There’s a lot going on, so we’re going to break it down. Let’s start at the top, what is it?

MICHAEL ZOLLNER: Well, we have the Kinect helmet. We wanted to detect obstacles in the immediate surroundings of the person, and therefore we decided to change the default perspective of the Kinect and mount it onto a helmet.

LAURA FOY: So the Kinect is actually acting as the eyes for the visually impaired person.

MICHAEL ZOLLNER: Correct. Right.

LAURA FOY: All right. Moving on down, he’s got a cummerbund of sorts here. What’s this fashion statement all about?

MICHAEL ZOLLNER: Well, this is a vibrotactile waist belt. It has three built-in pairs of vibration motors, and this is basically the feedback mechanism of the obstacle detection. So, whenever there’s an obstacle in one of these directions, left, center, or right, the corresponding vibration motors will vibrate.

LAURA FOY: So, if someone jumps in front of him, he gets zapped.

MICHAEL ZOLLNER: Correct, yes.

LAURA FOY: Let’s turn him around. We’ve got this custom-built laptop backpack thing, is this the brains of the operation? What’s going on back here?

MICHAEL ZOLLNER: Well, this backpack is basically for debugging and visualizing the state of the system, because he is the only one who feels the vibration. Here on the right side you see the image from the Kinect, and we did an overlay of our obstacle detection mechanism; at the moment, I am detected as an obstacle. And as I’m moving to the right, you can see that I’m tracked by the Kinect. And as I’m moving closer to the camera, there’s more danger to him, so the feedback gets more intense.

LAURA FOY: So the vibrations are actually stronger on his belt device?

MICHAEL ZOLLNER: Yes.
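As a rough illustration of the kind of mapping Michael describes, here is a small Python sketch that splits a depth frame into left, center, and right regions and turns the nearest reading in each region into a vibration intensity. The frame layout, the millimeter ranges, and the function name are assumptions made only for this example; the actual NAVI implementation differs.

    # Hypothetical sketch: closer obstacles produce stronger vibration on the
    # belt motor for that direction. A depth frame here is a list of rows of
    # distances in millimeters, with 0 meaning "no reading".

    def region_intensities(depth_frame, max_range_mm=2000, min_range_mm=500):
        width = len(depth_frame[0])
        thirds = [(0, width // 3), (width // 3, 2 * width // 3), (2 * width // 3, width)]
        intensities = []
        for start, end in thirds:
            # Nearest valid reading in this region.
            readings = [row[x] for row in depth_frame for x in range(start, end) if row[x] > 0]
            nearest = min(readings) if readings else max_range_mm
            # Linear ramp: full vibration at min_range_mm, none at max_range_mm.
            scale = (max_range_mm - nearest) / (max_range_mm - min_range_mm)
            intensities.append(max(0.0, min(1.0, scale)))
        return intensities  # [left, center, right], each 0.0 .. 1.0

    # Example: an obstacle close on the left, nothing ahead or to the right.
    frame = [[600, 600, 0, 0, 0, 0]] * 4
    print(region_intensities(frame))  # roughly [0.93, 0.0, 0.0]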

LAURA FOY: Awesome. All right. Now, we’ve set up these markers, so just explain to the audience what’s going to happen.

MICHAEL ZOLLNER: Well, these markers are for point-to-point navigation in a building. These markers have instructions encoded, which tell the person where to go, and it’s done by audio speech synthesis.

LAURA FOY: So, we’ll actually hear someone talking to him?

MICHAEL ZOLLNER: Correct.

LAURA FOY: Giving him direction. All right. Great. Let’s see it in action.

Nice, so the Kinect read the marker, and knew exactly what to do.

MICHAEL ZOLLNER: You see that the instructions are based on distance to the markers.

LAURA FOY: Absolutely. This is outstanding. I can’t wait to see what you guys do next. Great work, guys, great work.

(Applause.)
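To make the marker idea concrete, here is a hypothetical Python sketch of distance-dependent, marker-driven instructions. The marker IDs, phrasing, and distance thresholds are invented for illustration, and the announce function simply prints where a real system would call a speech synthesizer; none of it is the NAVI team’s actual code.

    # Hypothetical sketch: each detected marker maps to an instruction, and
    # what gets spoken depends on how far away the marker is.

    INSTRUCTIONS = {
        1: "Turn left at the corridor",
        2: "Turn right toward the exit",
        3: "You have reached your destination",
    }

    def instruction_for(marker_id, distance_m):
        base = INSTRUCTIONS.get(marker_id)
        if base is None:
            return None
        if distance_m > 3.0:
            return None                      # too far away to announce yet
        if distance_m > 1.0:
            return f"In {distance_m:.0f} meters: {base.lower()}"
        return base                          # right at the marker

    def announce(text):
        # A real system would hand this to a speech synthesizer; here we print.
        if text:
            print("SPEAK:", text)

    announce(instruction_for(marker_id=1, distance_m=2.4))  # "In 2 meters: turn left at the corridor"
    announce(instruction_for(marker_id=1, distance_m=0.6))  # "Turn left at the corridor"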

Unbelievable. All right. Well, it wouldn’t be a Kinect demo without a little bit of fun, a little bit of games. So, I’m going to bring out Jared Ficklin from Frog Design. And we’re going to demo Wall Panic 3000.

All right, Jared. Thanks for coming on. Now, Frog Design isn’t really a videogame company, so why build something like this?

JARED FICKLIN: No, we’re more of a bring-design-and-innovation-to-life kind of company, but we really believe in playful research and development.

LAURA FOY: Me, too.

JARED FICKLIN: So, this was a great opportunity to take technology that we use with our clients, and just have some fun with it.

LAURA FOY: Absolutely. And there is a lot of fun. What would you say the inspiration was?

JARED FICKLIN: Well, at its heart, it’s 8-bit gaming meets the Kinect. We needed something that was also really easy to walk up and play, and it needed to be as fun to watch as it was to actually play.

But, I think, actually, our team in New York who created this thing, they might just watch a little too much Japanese TV.

LAURA FOY: Well, we’ll let the audience decide if it’s just as much fun to watch, because it is a two-player game. So, to play with me, I’m going to bring out someone who is smarter than me, someone who is better looking than me. And, as always, he wasn’t available, so let’s welcome Scott Guthrie.

All right, Scott. Are you ready?

SCOTT GUTHRIE: I’m ready.

LAURA FOY: Have you been stretching?

JARED FICKLIN: All right. You’re going to be right on here, and the game is going to come up, count down, and you’ll see a blobby version of yourself. Match the blobby version of yourself with the hole in the wall.
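One plausible way to score a “match the hole” pose, sketched here in Python under assumed data structures, is to compare the player’s silhouette mask against the cutout mask and measure how much of the silhouette lands inside the hole. Nothing here comes from Frog Design’s actual game code.

    # Hypothetical sketch: masks are small lists of 0/1 rows; a real Kinect
    # game would derive the silhouette from full-resolution depth frames.

    def fit_score(silhouette, hole):
        inside = outside = 0
        for sil_row, hole_row in zip(silhouette, hole):
            for sil_px, hole_px in zip(sil_row, hole_row):
                if sil_px:
                    if hole_px:
                        inside += 1
                    else:
                        outside += 1
        total = inside + outside
        return inside / total if total else 0.0  # 1.0 means a perfect fit

    player = [[0, 1, 1, 0],
              [0, 1, 1, 0],
              [1, 1, 1, 1]]
    cutout = [[0, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 1, 1, 0]]
    print(f"{fit_score(player, cutout):.2f}")  # 0.75 -- arms stick outside the hole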

Are you ready? We’ll start with an easy one. Let’s go.

LAURA FOY: All right. We’re a team. Let’s do it.

JARED FICKLIN: We left off the C and the A, but if you can do the Y and the M. I’m glad I set it on easy mode.

I should probably let you know now that as much as we want this to be about computer vision, it’s actually just designed to get you into awkward positions.

(Cross talk.)

JARED FICKLIN: Very good. All right now, at this point, this one is going to be a little more fun. You’ve got to feather your hair out. Scott, are you Farrah?

LAURA FOY: Scott’s got it good.

(Cross talk.)

JARED FICKLIN: The last-minute reversal might have saved you there. Now, with your beliefs in the laws of physics --

SCOTT GUTHRIE: OK.

LAURA FOY: Yes, I’m not --

JARED FICKLIN: You might have to suspend them for this. It really is possible.

LAURA FOY: It’s close. Oh, good. Burn those pictures.

JARED FICKLIN: Oh, no, those are going on Twitter.

LAURA FOY: All right, Scott. We’re going to leave you alone. Jared and I have to work on the maneuver.

SCOTT GUTHRIE: So, this is going to show up on YouTube, isn’t it?

You’ve seen some pretty cool experiences today, and we want you to be able to start building these experiences as well. As you heard earlier this week, we’re going to be making the Kinect SDK available, so that you can download it and build these types of experiences yourself on your PC. And to help make sure you’ve got everything you need, we’re also pleased to announce that every attendee is going to get a Kinect.

(Whistles, cheers and applause.)

You’ll be able to pick them up either today or tomorrow; the locations are there. There will probably be a long line, but there’s plenty for everyone, so don’t worry, and take your time picking them up. We hope the things we’ve shown throughout the conference have inspired you, and we’re looking forward to seeing how you take them to the next level and what you create. I hope you have a great week. Thank you for coming to MIX.

Thanks. (Applause.)

END