Scott Guthrie: Build 2014

ANNOUNCER: Ladies and gentlemen, please welcome Executive Vice President, Cloud and Enterprise, Scott Guthrie. (Cheers, applause.)

SCOTT GUTHRIE: Good morning, everyone, and welcome to day two of Build.

We now live in a mobile-first, cloud-first world. Yesterday, we talked about some of the great innovations we’re doing to enable you to build awesome client and device experiences. Today, I’m going to continue that conversation and talk about how you can power those experiences using the cloud.

Azure is Microsoft’s cloud platform and enables you to move faster and do more. A little over 18 months ago here in San Francisco, we talked about our new strategy with Azure and our new approach, a strategy that enables you to use both infrastructure as a service and platform as a service capabilities together, a strategy that enables developers to use the best of the Windows ecosystem and the best of the Linux ecosystem together, and one that delivers unparalleled developer productivity and enables you to build great applications and services that work with every device. Since then, we’ve been hard at work fulfilling that promise.

Last year was a major year for Azure. We shipped more than 300 significant new features and releases. 2014 is going to be even bigger. In fact, this morning during the keynote, we had more than 44 new announcements and services that we’re going to be launching. It’s going to be a busy morning.

Beyond just features, though, we’ve also been hard at work expanding the footprint of Azure around the world. The green circles you see on the slide here represent Azure regions, which are clusters of datacenters close together, and where you can go ahead and run your application code.

Just last week, we opened two new regions, one in Shanghai and one in Beijing. Today, we’re the only major global cloud provider that operates in mainland China. And by the end of the year, we’ll have more than 16 public regions available around the world, enabling you to run your applications closer to your customers than ever before.

As we’ve seen our features and footprint expand, we’ve seen our adoption of Azure dramatically grow. More than 57 percent of the Fortune 500 companies are now deployed on Azure. Customers run more than 250,000 public-facing websites on Azure, and we now host more than 1 million SQL databases on Azure.

More than 20 trillion objects are now stored in the Azure storage system. We have more than 300 million users, many of them — most of them, actually, enterprise users, registered with Azure Active Directory, and we process now more than 13 billion authentications per week.

We have now more than 1 million developers registered with our Visual Studio Online service, which is a new service we launched just last November.

Let’s go beyond the big numbers, though, and look at some of the great experiences that have recently launched and are using the full power of Azure and the cloud.

“Titanfall” was one of the most eagerly anticipated games of the year, and had a very successful launch a few weeks ago. “Titanfall” delivers an unparalleled multiplayer gaming experience, powered using Azure.

Let’s see a video of it in action, and hear what the developers who built it have to say.

(Video segment: “Titanfall”)

SCOTT GUTHRIE: (Applause.) One of the key bets the developers of “Titanfall” made was to run all game sessions in the cloud. In fact, you can’t play the game without the cloud, and that bet really paid off.

As you heard in the video, it enables much, much richer gaming experiences. Much richer AI experiences. And the ability to tune and adapt the game as more users use it.

To give you a taste of the scale, “Titanfall” had more than 100,000 virtual machines deployed and running on Azure on launch day, which is an unparalleled scale for a game launch, and the reviews of the game have been absolutely phenomenal.

Another amazing experience that recently launched and was powered using Azure was the Sochi Olympics delivered by NBC Sports.

NBC used Azure to stream all of the games both live and on demand to both Web and mobile devices. This was the first large-scale live event that was delivered entirely in the cloud with all of the streaming and encoding happening using Azure.

Traditionally, with live encoding, you typically run in an on-premises environment because it’s so latency dependent. With the Sochi Olympics, Azure enabled NBC to not only live encode in the cloud, but also do it across multiple Azure regions to deliver high-availability redundancy.

More than 100 million people watched the online experience, and the U.S. versus Canada men’s hockey match alone drew more than 2.1 million concurrent viewers, a new world record for online HD streaming.

(Video segment: “Olympics”)

SCOTT GUTHRIE: I’m really excited to invite Rick Cordella, who is the senior vice president and general manager of NBC Sports Digital, on stage to talk with us a little bit about the experience and what it meant.

So the first question I had, can you tell us a little bit about what the Olympics means to NBC?

RICK CORDELLA: It’s huge. I mean, even looking at that video right there, I’m taken back to a month ago and how special it is, you know, what it means to the athletes. But what it means to NBC is big. It’s enormous for our company. Steve Burke, our CEO, calls it the heart and soul of the company. And if you consider how much content and how many events NBC is connected to, that’s a pretty bold statement.

Six months out, we actually take our peacock icon and adorn it with the Olympic rings. So for every piece of content that appears on the NBC broadcast network, the Olympic rings are present. It’s big for our company.

SCOTT GUTHRIE: Can you talk a little bit about the elastic scale and how the cloud is kind of key to enabling it?

RICK CORDELLA: Sure. You mentioned that semifinal game between the U.S. and Canada that Friday afternoon. To be able to scale to that massive amount of volume is enormous. Setting records. You go from a curling match that may have just one stream going on to over 30 concurrent streams.

And then, oh, by the way, you have five EPL games happening at the same time, a PGA tour tournament that’s happening, and you really need that planning to go into place as we scale out across 2,000-plus events with the NBC sports group.

SCOTT GUTHRIE: Can you talk just a little bit in terms of — clearly, it’s a big deal for NBC. How critical is it to have an enterprise-grade platform deliver it?

RICK CORDELLA: The company bets about $1 billion on the Olympics each time it goes off. And we have 17 days to recoup that investment. Needless to say, there is no safety net when it comes to putting this content out there for America to enjoy. We need to make sure that content is out there, that it’s quality, that our advertisers and advertisements are being delivered to it. There really is no going back if something goes wrong.

SCOTT GUTHRIE: Cool, I’m glad it went well.

RICK CORDELLA: Yeah. No, I mean, Azure — honestly, I know I’m speaking here, but Azure really played a critical role in this happening. It’s not as if you can just pick a company out there that has a product that you don’t trust to pull off an event of this magnitude. These are the largest digital events that any company pulls off. And we’re really happy that we worked closely with Microsoft Azure this time around.

SCOTT GUTHRIE: Great. Thanks, Rick.

RICK CORDELLA: Thanks, Scott. (Applause.)

SCOTT GUTHRIE: So we’ve talked at a high level about what you could do with Azure. Let’s now dive into specifics.

One of the things that makes Azure unique is its rich set of infrastructure as a service and platform as a service capabilities and how it enables developers to leverage these features together to build great applications that can support any device.

Let’s go ahead and look at some of the great new enhancements we’re releasing this week in each of these different categories.

First up, let’s look at some of the improvements we’re making with our infrastructure features and some of the great things we’re enabling with virtual machines.

Azure enables you to run both Windows and Linux virtual machines in the cloud. You can run them as stand-alone servers, or join them together to a virtual network, including one that you can optionally bridge to an on-premises networking environment.

This week, we’re making it even easier for developers to create and manage virtual machines in Visual Studio without having to leave the VS IDE: You can now create, destroy, manage and debug any number of VMs in the cloud. (Applause.)

Prior to today, it was possible to create reusable VM image templates, but you had to write scripts and manually attach things like storage drives to them. Today, we’re releasing support that makes it super-easy to capture images that can contain any number of storage drives. Once you have this image, you can then very easily take it and create any number of VM instances from it, really fast, and really easy. (Applause.)

Starting today, you can also now easily configure VM images using popular frameworks like Puppet and Chef, as well as our own PowerShell-based tools. These tools enable you to avoid having to create and manage lots of separate VM images. Instead, you can define common settings and functionality using modules that can cut across every type of VM you use.

You can also create modules that define role-specific behavior, and all these modules can be checked into source control and they can also then be deployed to a Puppet Master or Chef server.

And one of the things we’re doing this week is making it incredibly easy within Azure to basically spin up a server farm and be able to automatically deploy, provision and manage all of these machines using these popular tools.
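The module idea above can be sketched in a few lines of plain Python. This is illustrative only, not the actual Puppet or Chef APIs: cross-cutting modules define common settings, role-specific modules layer behavior on top, and both compose onto any node.

```python
# Illustrative sketch only: models the idea of cross-cutting config
# modules plus role-specific modules, not the real Puppet/Chef APIs.

def base_module(node):
    # Settings shared by every VM, regardless of role.
    node["settings"].update({"ntp": "time.example.com", "monitoring": True})

def web_role(node):
    # Role-specific behavior layered on top of the base module.
    node["settings"].update({"iis": True, "open_ports": [80, 443]})

def apply_modules(name, modules):
    node = {"name": name, "settings": {}}
    for module in modules:
        module(node)  # each module contributes its settings
    return node

web_vm = apply_modules("web-01", [base_module, web_role])
print(web_vm["settings"]["open_ports"])  # [80, 443]
```

The point of the pattern is that the same `base_module` applies to every VM in the farm, so you manage one definition in source control instead of many separate images.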

What I want to do here is invite Mark Russinovich on stage to actually show off how you can use all this functionality and some of the cool things you can now do with it. Here’s Mark. (Applause.)

MARK RUSSINOVICH: I thought we were going to wear black today, Scott.

SCOTT GUTHRIE: Yeah. (Laughter.)

MARK RUSSINOVICH: Oh, well. Morning, everybody.

So let’s get started. I’m going to show you how easy it is to create a virtual machine from inside of Visual Studio here by going to the Server Explorer, going down to virtual machines, right clicking, and then you can see a new menu item there, create virtual machine.

Clicking it launches a wizard experience that looks a lot like the portal’s wizard experience, but I’m doing it right from here inside of Visual Studio. First step, pick a subscription to deploy into. Second step, pick an operating system image. I’m, of course, going to pick the best one on this list, the latest version of Windows Server 2012 R2.

Then I pick a virtual machine name. So I’ll give it a nice, unique name here. Provision a user account to log into the machine if I need to. Either create a new cloud service, or deploy into an existing cloud service. I’ll go ahead and pick an existing one. And then pick a storage account, into which the operating system disk gets created. I’m going to, again, pick an existing storage account.

Press next. And the final step would be to configure any network ports that I want to open up on the machine. But I’m good with the default, so I’ll just press “create” and let it launch. In a few minutes, we’ll have a virtual machine ready to go.

But it wouldn’t be that cool if all you could do from Visual Studio was create and delete virtual machines.

What’s even better is that you can also debug your virtual machines right from inside of Visual Studio from your desktop.

And to demonstrate that, I’ve got a Web service rich client application here. It’s an expense submission application. You can see I’ve loaded up the client and the service into Visual Studio.

I’m going to launch the client here, just so you can see what it looks like. And it’s already prepopulated here with some expenses from my day yesterday here in San Francisco.

Now, you can immediately see that something’s wrong. And that is that when I go to a Mexican restaurant for lunch, the margaritas that I drank come out to way more than $12. So I’ll fix that right there. (Laughter.)

And now let me switch to the virtual machine that the service is running in. And you can see right here, it’s ready for an expense submission.

And I’m going to switch back. And let’s just presume that I’ve got a bug inside of the submit expense method up in that service. And here we can see, submit for approval. I’m going to set a break point right there at the entry point.

And now my next step is to connect Visual Studio up to that machine in the cloud so I can interactively debug it.

The first thing I need to do is enable debugging in that virtual machine. And you can do that inside of Visual Studio by clicking on a virtual machine and selecting the “enable debugging” menu item.

What this does is takes advantage of the Azure agent that sits inside of the virtual machine that I created to dynamically inject the Visual Studio debugging client. And once it’s injected, then I can go ahead and use Visual Studio to connect to it and debug the code that’s running inside that virtual machine.

That machine that’s running that expense service already has that debugger agent injected into it. So all I have to do is right click, say attach debugger, and now I’m going to retrieve a list of processes running in that virtual machine, the one I’m interested in, the one running the service, which is right here. Expense IT service.

I press “attach,” and at this point you can see Visual Studio is ready to hit that break point. So when I go back to the rich client and click “submit” you see I just hit the break point live, and now I can debug as if this thing was on my local desktop. So no more installing Visual Studio on the server. (Cheers, applause.)

So the next thing that Scott talked about was the power of creating VM images from your running VMs that consist of complex setups with multiple disks. I’ve got a virtual machine up here called a RigVMBuild B. And you can see that it’s got an OS disk and a whole bunch of data disks attached to it.

If I wanted to create multiple versions of this with those copies of the data on that disk, of course I could scrape together PowerShell scripts to do it. But with the new cmdlets and the new REST APIs we’ve got in Azure, I can do it very easily.

And so here’s a PowerShell command that invokes the Save-AzureVMImage cmdlet. I’m going to go ahead and click that. And what that’s going to do is launch off a capture of that VM into a VM image that I can then deploy from.

I’ve already actually created a VM image from that machine and I’m going to reference it right here. You can see this is a new Azure VM with a config here that specifies an image name, and that image name is the image name of a previous capture.

What this is doing is provisioning a new VM instance from that VM image capture that I made previously. And as Scott had mentioned, one of the cool scenarios for this is if I’ve got a test environment where I want to test multiple different copies, maybe throw a different test at each one in parallel, I can go stamp out multiple instances of that same VM image.

Another way to use this, though, is kind of as a snapshot restore capability. So if I’m debugging and I want to go back to a previous point in time, I can capture an image at a good state, go do some work in the VM, then delete that VM and create a new instance back from that previous state. So two really cool scenarios enabled with this just with those simple commands.
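The capture-then-provision workflow Mark describes can be sketched conceptually. This is plain Python with made-up names, standing in for the cmdlet and REST API calls shown in the demo: an image is a copy of all the VM’s disks, and each new instance gets its own independent copy of those disks.

```python
# Conceptual sketch of the capture/provision workflow, with
# hypothetical names; the real demo used PowerShell cmdlets
# and the Azure REST APIs.
import copy

def capture_image(vm):
    # Capture the OS disk plus all attached data disks as one image.
    return {"name": vm["name"] + "-image",
            "disks": copy.deepcopy(vm["disks"])}

def provision_from_image(image, instance_name):
    # Every new instance gets its own copy of every disk in the image.
    return {"name": instance_name,
            "disks": copy.deepcopy(image["disks"])}

build_vm = {"name": "build-vm", "disks": ["os.vhd", "data1.vhd", "data2.vhd"]}
image = capture_image(build_vm)

# Stamp out several identical test VMs from the same capture,
# e.g. to run a different test against each one in parallel.
test_vms = [provision_from_image(image, f"test-{i}") for i in range(3)]
print([vm["name"] for vm in test_vms])  # ['test-0', 'test-1', 'test-2']
```

The snapshot-restore scenario is the same mechanism: capture at a known-good state, work in the VM, then provision a fresh instance from the capture to get back to that state.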

I’ll go back to the portal now to see if we can see that VM coming up, spinning up. And there it is, a RigDBVM. Hold on, let me refresh. And we should see that VM show up here, and there it is, it’s retrieving status, there we go, starting. This is the VM that I just provisioned.

If I go take a look at its details, you can see that it’s got all those disks just like that original one did, just with the simple command. (Applause.)

The final thing I want to talk about is the integration with configuration management systems. Specifically, in this case, Puppet. With the collaboration with Puppet Labs, we’ve made it very easy to go and create Puppet Masters from within Windows Azure by adding a Puppet Master image to our platform image repository.

So you can see down here, we’ve got a Puppet Labs section. And if I click that, I’ll be able to launch a Puppet Enterprise Puppet Master server inside of Windows Azure.

But we’ve also made it easy to create Puppet agents, machines running the Puppet agent that connect to a Puppet Master. And that’s what I’m going to do now is switch over to here and create a Windows-based virtual machine.

Give it a name, type in my name and password. Press next. And at this point, I’m going to see if the defaults work, because nobody’s taken that name. And the final step is to install the VM agent. If we’ve got the VM agent installed, we can use that same agent technology to inject other code into that VM.

And the one that I’m going to inject for this demo is Puppet. You can see we’ve also partnered with Chef to get Chef agent support in there. And now at this point, all I have to do is tell it where the Puppet Master is. And I’ll just type in an example Puppet Master name. And at that point, when I go and provision that virtual machine, that Puppet agent is going to launch and connect to that Puppet Master and I’ll be able to manage it from there and deploy code into it.

To actually show you deploying code into virtual machines on Azure from a Puppet Master, I’m going to invite up Luke Kanies, the CEO of Puppet Labs, to take us further into the demo. Luke? (Applause.)

LUKE KANIES: So we at Puppet Labs exist to help you automate the configuration and ongoing management of your — as we like to think of them — stupid computers. And the goal is to allow you to do a lot less firefighting and a lot less script running and a lot less maintenance of things like golden images and things like that and a lot more time getting your software in front of your users more often, more quickly, and a lot less hassle.

One of the great things about Puppet is that it works on physical machines, virtual machines, it works in the public cloud, it works on private cloud, and really any combination of that. It works on pretty much any operating system you want to manage, anything you’re not embarrassed to admit you run, we can probably manage it whether it’s a network device or a firewall or a standard computer.

And it does this at massive scale. We’ve got tens of millions of machines under management by Puppet. And we’ve got sites with more than 100,000 servers managed by just one Puppet infrastructure.

We’ve got great companies who are using Puppet to do interesting work in their datacenters, including NASA, GitHub, Intel, Bank of America, and a lot more.

So we’re excited to bring Puppet Enterprise to Azure. And I’m going to give a small example of what it looks like to use it here.

So this is just the normal interface to using Puppet Enterprise. You can see here we’ve got some machines under management, a small number here, and a bar appears every time a Puppet agent runs and updates your infrastructure, saying, hey, something happened. In the blue case, it says we actually had to make some sort of change to bring you into sync. In the green case, it just says we checked, everything is great, we didn’t have to do any extra work.

So in this case, I’m going to make some changes to our Windows Servers. I’m going to go to the Windows Server group. And what we want to do here is we’ve got an example machine that is running — you know, everyone uses the task manager for various things. I’ve heard there’s a better version of the task manager out there.

And so what we’re going to do is see what it takes to update those. It’s a pretty straightforward operation with Puppet. We go in, and in Puppet a class is essentially the way of referring to the code associated with the function we want to perform. So we’ve got a Puppet module named Microsoft Sysinternals.

MARK RUSSINOVICH: Ah, you do have good taste in tools. (Laughter.)

LUKE KANIES: And with this, we associate the class, the work we want done, with all our machines. Normally this change would propagate out to your whole infrastructure over the space of about half an hour. If you’ve got 100,000 machines under management, you probably don’t want all of them hitting your servers at exactly the same time.

In this case, though, we’ve got the system running on a relatively tighter timeline. And so we go look at it, and now we’ve got the better, far more powerful version running.

This is a very small example of what you can do. You can manage complete application stacks. You can manage the infrastructure, all the work of laying the bits down so that the system works, up through having your whole application built: your database, your application server, your Web server, things like that.

So it really is a powerful system for getting all the work done from the bare OS up to a functioning, running application. This is especially important in the cloud where the whole goal here is if you can get your virtual machine up in five minutes, but it still takes you three weeks to configure the server, that kind of defeats the point. So the goal here is to get the speed of configuration at the same rate as the speed of building the machine itself.

SCOTT GUTHRIE: Great. Well, this awesome collaboration between Puppet Labs and Microsoft, bringing Puppet into Azure, enables our customers that use tools like Puppet to get started on Azure using their existing processes.

To talk about that, I’m going to ask on stage Daniel Spurling, who is from Getty Images to talk about how this is helping Getty move to Azure. (Applause.)

DANIEL SPURLING: Thank you. Good morning, everyone. We are Getty Images, we serve more than 1.5 million active customers in more than 185 countries, providing the best conceptual and editorial content in the world.

We brought stock photography into the digital age, pioneering the ability to license content and imagery online.

We push the envelope every day. When a photographer around the world takes a brilliant photo, we want that photo to be available for anyone in the world within minutes.

Because these assets you see on the screen here, they have a job to do. They evoke emotion. Corporations around the world, companies of every single type, rely on us to help them tell their story.

We also recently launched an embed product which allows anyone, anywhere, free of charge, to utilize over 40 million pieces of our high-quality imagery for noncommercial use. This is our first step into the consumer market.

Now, we ingest and manage millions of images and videos, with millions of new pieces of content added every single day. And with that, for Getty to succeed, technology must have the same scale, agility, and global footprint as our company does in order to support our massive content flow from any corner of the world, from Tokyo to Rio de Janeiro.

We’re excited about the Microsoft Azure Cloud Platform, which works with our tools such as Puppet. It gives us the global scale and infrastructure anywhere that we need it, be it Japan or Brazil. We today actively use Puppet for automation and configuration management. And this will give us the agility and consistency we need to move across environments when we burst from our datacenter into an external cloud provider, landing it right every single time.

As our business grows and our requirements expand, we will need to continue to support more content across more devices, and therefore, our infrastructure needs to scale with us dynamically and without friction.

For that, the cloud that we choose and the tools that we use must truly be open and seamlessly support both our Windows and Linux environments.

That’s why we at Getty are excited about the Puppet Labs and Microsoft partnership, and the value that Puppet and Azure can bring to our business and customers. Thank you. (Applause.)

SCOTT GUTHRIE: So infrastructure as a service gives you a very flexible environment and enables you to manage it however you want.

Actually, before I go there, a whole bunch of announcements here. As you saw this morning, we’ve really made a bunch of improvements to our infrastructure as a service capabilities. Just to list a number of them: you saw VM image capture and deployment, the Visual Studio integration, and the Puppet and Chef support.

We’re also excited to announce the general availability of our auto-scale service, as well as a bunch of great virtual networking capabilities including point-to-site VPN support going GA, new dynamic routing, subnet migration, as well as static internal IP addresses. And we think the combination of this really gives you a very flexible environment, as you saw, a very open environment, and lets you run pretty much any Windows or Linux workload in the cloud.

So we think infrastructure as a service is super-flexible, and it really kind of enables you to manage your environments however you want. We also, though, provide prebuilt services and runtime environments that you can use to assemble your applications as well, and we call these platform as a service capabilities.

One of the benefits of these prebuilt services is that they enable you to focus on your application and not have to worry about the infrastructure underneath it.

We handle patching, load balancing, high availability and auto scale for you. And this enables you to work faster and do more.

What I want to do is just spend a little bit of time talking through some of these platform as a service capabilities, so we’re going to start talking about our Web functionality here today.

One of the most popular PaaS services that we now have on Windows Azure is something we call the Azure Website Service. This enables you to very easily deploy Web applications written in a variety of different languages and host them in the cloud. We support .NET, Node.js, PHP, Python, and we’re excited this week to also announce that we’re adding Java language support as well.

This enables you as a developer to basically push any type of application into Azure into our runtime environment, and basically host it to any number of users in the cloud.

A couple of the great features we have with Azure Websites include auto-scale capability. What this means is you can start off running your application, for example, in a single VM. As more load increases to it, we can then automatically scale up multiple VMs for you without you having to write any script or take any action yourself. And if you get a lot of load, we can scale up even more.

You can basically configure how many VMs you maximally want to use, as well as what the burn-down rate is. And this is great because it enables you to not only handle large traffic spikes and make sure that your apps are always responsive; the nice thing about auto-scale is also that when the traffic drops off, maybe during the night when it’s a little bit less, we can automatically scale down the number of machines that you need, which means you end up saving money and not having to pay as much.
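As a rough sketch of the scaling rule just described, a toy model, not Azure’s actual algorithm: pick a target load each VM can handle, divide the current load by it, and clamp the result between a minimum and your configured maximum.

```python
# Toy auto-scale rule, in the spirit of the feature described above.
# The load numbers and thresholds are illustrative.
import math

def target_vm_count(current_load, load_per_vm, min_vms=1, max_vms=10):
    # How many VMs do we need to serve the current load?
    needed = math.ceil(current_load / load_per_vm) if current_load > 0 else min_vms
    # Never go below the floor or above the configured maximum.
    return max(min_vms, min(max_vms, needed))

print(target_vm_count(50, 100))    # 1  - light overnight traffic, scale down
print(target_vm_count(350, 100))   # 4  - traffic spike, scale out
print(target_vm_count(5000, 100))  # 10 - huge spike, clamp at the max
```

The same function handles both directions: as traffic drops, `needed` falls and the farm shrinks back toward the minimum, which is where the cost savings come from.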

One of the really cool features that we’ve recently introduced with websites is something we call our staging support. This solves kind of a pretty common problem with any Web app today, which is there’s always someone hitting it. And how do you stage the deployments of new code that you roll out so that you don’t ever have a site in an intermediate state and that you can actually deploy with confidence at any point in the day?

And what staging support enables inside of Azure is for you to create a new staging version of your Web app with a private URL that you can access and use to test. And this allows you to basically deploy your application to the staging environment, get it ready, test it out before you finally send users to it, and then basically you can push one button or send a single command called swap where we’ll basically rotate the incoming traffic from the old production site to the new staged version.

What’s nice is we still keep your old version around. So if you discover once you go live you still have a bug that you missed, you can always swap back to the previous state. Again, this allows you to deploy with a lot of confidence and make sure that your users are always seeing a consistent experience when they hit your app.
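The swap semantics described above amount to an atomic exchange between two slots; a minimal sketch (the class and slot names here are illustrative, not an Azure API):

```python
# Illustrative model of staged deployments: "production" and "staging"
# are pointers to deployed versions, and swap() exchanges them,
# keeping the old version around for rollback.

class Site:
    def __init__(self, production, staging=None):
        self.slots = {"production": production, "staging": staging}

    def swap(self):
        # Rotate incoming traffic to the staged version; the previous
        # production build stays in the staging slot for rollback.
        p, s = self.slots["production"], self.slots["staging"]
        self.slots["production"], self.slots["staging"] = s, p

site = Site(production="v1", staging="v2")
site.swap()                      # v2 goes live, v1 kept for rollback
print(site.slots["production"])  # v2
site.swap()                      # found a bug? swap back
print(site.slots["production"])  # v1
```

Because a swap only exchanges pointers, users never see a half-deployed site, and rolling back is just swapping again.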

Another cool feature that we’ve recently introduced is a feature we call Web Jobs. This enables you to run tasks in the background, outside the HTTP request/response path. So if something takes a while to run, this is a great way to offload that work so that you’re not stalling your request/response thread pool.

A common scenario we see for a lot of people: when someone submits something to the website and you want to process it in the background, you can go ahead and simply drop an item into a queue or into a storage account, respond back down to the user, and then with one of these Web Jobs very easily run background code that pulls that queue message and actually processes it in an offline way.

And what’s nice about Web Jobs is you can run them now in the same virtual machines that host your websites. What that means is you don’t have to spin up your own separate set of virtual machines, and again, enables you to save money and provides a really nice management experience for it.
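The queue-based pattern just described can be sketched with Python’s standard library standing in for an Azure storage queue and the Web Jobs host (all names here are illustrative):

```python
# Sketch of the queue-based background-work pattern described above.
# queue.Queue stands in for an Azure storage queue, and the worker
# thread stands in for a continuously running Web Job.
import queue
import threading

work_queue = queue.Queue()
results = []

def handle_request(item):
    # The web tier enqueues the slow work and responds immediately,
    # so the request/response thread pool is never stalled.
    work_queue.put(item)
    return "202 Accepted"

def web_job_worker():
    # The background job pulls messages and processes them offline.
    while True:
        item = work_queue.get()
        if item is None:   # sentinel to stop this demo worker
            break
        results.append(item.upper())  # stand-in for real processing

worker = threading.Thread(target=web_job_worker)
worker.start()
print(handle_request("encode-video-42"))  # 202 Accepted
work_queue.put(None)  # shut the worker down for this demo
worker.join()
print(results)  # ['ENCODE-VIDEO-42']
```

The user gets an immediate response while the heavy work happens later, and because the worker runs in the same VM as the site, there is no extra set of machines to manage.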

The last cool feature that we’ve recently introduced is something we call traffic manager support. With Traffic Manager, you can take advantage of the fact that Azure runs around the world, and you can spin up multiple instances of your website in multiple different regions around the world with Azure.

What you can then do is use Traffic Manager so you can have a single DNS entry that you then map to the different instances around the world. And what Traffic Manager does is gives you a really nice way to automatically, for example, route all your North American users to one of the North American instances of your app, while people in Europe will be routed to the European version of your app. That gives you better performance, response and latency.

Traffic Manager is also smart enough so that if you ever have an issue with one of the instances of your app, it can automatically remove it from those rotations and send users to one of the other active apps within the system. So this gives you also a nice way you can fail over in the event of an outage.

And the great thing about Traffic Manager, now, is you can use it not just for virtual machines and cloud services, but we’ve also now enabled it to work fully with websites.
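A toy model of that routing behavior (the region names and the nearness map below are made up for illustration): each user is sent to the nearest endpoint, and an unhealthy endpoint is skipped in favor of the next-nearest one.

```python
# Toy model of performance-based routing with health-probe failover,
# in the spirit of Traffic Manager. Regions and cities are illustrative.

endpoints = {"us-west": {"healthy": True},
             "europe-west": {"healthy": True}}

# Each city's endpoints, ordered nearest-first.
nearest = {"Seattle": ["us-west", "europe-west"],
           "London": ["europe-west", "us-west"]}

def resolve(user_city):
    # Walk the preference list and return the first healthy endpoint,
    # the way a DNS-based policy with health probes would.
    for region in nearest[user_city]:
        if endpoints[region]["healthy"]:
            return region
    raise RuntimeError("no healthy endpoints")

print(resolve("London"))   # europe-west: nearest healthy endpoint
endpoints["europe-west"]["healthy"] = False
print(resolve("London"))   # us-west: automatic failover
```

Because the decision happens at name resolution, a failed region simply drops out of rotation and users land on the next healthy instance.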

And to show off all these great Web capabilities, as well as some of the great improvements that we’re making inside Visual Studio, I’d like to invite Mads Kristensen on stage. (Applause.)

MADS KRISTENSEN: Thanks. Hi, folks. I’m really excited to be here today to show you a few of the brand new features for Web developers in Visual Studio and Azure Websites.

So let me start by creating a brand new ASP.NET Web application. And as a new thing, as you can see, we made it really easy for you to provision both Azure Websites and virtual machines directly from this dialog.

You can even provision a new database directly from here, which lets you set up your entire development environment ahead of time.

So now my project is created and Visual Studio is provisioning Azure, but it’s also now creating publishing scripts that I can use to automate my deployment.

And I’m going to open it here in the brand new PowerShell editor in Visual Studio. I can make any modification easily. And I can even use this in my continuous integration environment.

Now, let me switch to an existing website. So this is an ASP.NET application using AngularJS on the front end, and I’ve been building this with a few friends of mine. And if we just take a look here in the browser, we can see that this is called Clip Me. And it allows me to upload animated GIFs and have text burned into the animations automatically so it’s easy to share those images.

And we’ve been working really hard on this website. And my friends have asked me here, can you please just change the background color of the header of my website?

You know, I said, “Yes, of course I can do that.” So the thing is, I can’t really remember what is the style sheet that I need to find in Visual Studio and exactly where do I find the rule set that I need to change.

Normally, what I would do is that I would open the browser development tools here, and then I would make my modifications here. And when I’m happy, I would go back to Visual Studio and I would do the same thing all over again.

But now I can simply make the change directly here in the browser, and as I do, Visual Studio automatically syncs any change I make in the developer tools. (Applause.)

I’m just going to go with my favorite shade of blue here. Now, this is using a feature called Browser Link. And Browser Link is a two-way communication channel between Visual Studio and any Web browser.

So this works in Chrome, for instance, as we have right here. And notice how my favorite shade of blue has already been applied to the header. Because if I make changes in one browser’s developer tools, Visual Studio and Browser Link makes sure to stream that change into any other browser.

So now here in Chrome, I’m noticing that I actually made a typo here. I have repeated a word. So I’m going to use Browser Link again. Now, I’m going to put Chrome into design mode.

So now as I hover over any element in the browser, Visual Studio opens the exact source file and highlights my selection. (Applause.) Yes.

So now it’s as simple as double-clicking the thing that I want to change, and as I make the change in the browser, Visual Studio just follows along. That is cool. (Applause.)

So this website is getting bigger and bigger, and I have a lot of JavaScript code in my project. Now, the problem with having a lot of JavaScript can sometimes be making sure that we’re following best practices all the time.

So let’s just open here one of my AngularJS directives. And notice here that Visual Studio is now running JSHint directly inside Visual Studio. JSHint is a static code analysis tool that helps me catch common mistakes.

So in this case, I forgot a semicolon, that’s easy to fix. And here, I should use three equals signs instead of two. I always make this mistake. So here we go, I save the document, the errors go away, and I’m now pretty happy. All right? I think that I have a great website now, and I’m ready to publish.
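The two-versus-three-equals warning JSHint raises here is worth a quick illustration: loose equality (`==`) coerces types before comparing, which can hide bugs, while strict equality (`===`) compares type and value.

```javascript
// Why JSHint prefers "===" over "==": loose equality coerces types.
console.log("0" == 0);           // true  -- the string is coerced to a number
console.log("0" === 0);          // false -- different types, no coercion
console.log(null == undefined);  // true under loose equality
console.log(null === undefined); // false under strict equality
```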

So as Scott was mentioning, I can now take advantage of a new staging feature of Azure Websites. So I’m going to publish to my staging slot here. And we’re just going to hit the button.

So now Visual Studio is publishing just my changes. And what it’s done, it’s going to open the browser and if we just zoom in here real quick, we can see that we have “-staging” as part of the URL. So that makes it easy for me to find out that this is now my staging environment.

So now all my friends can now test the website in a staging environment. And when we’re all happy and make sure that everything works, we want to move this into production.

So to do that, I’m going to go to the Azure portal and I’m going to hit swap. So what happens is that my staging environment is being swapped for my production environment. But some of my configurations such as my SSL certificates and public domain names, they stay where they are.
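The swap semantics just described can be sketched as follows. This is a simplified model, not the Azure Websites implementation; the site names and settings are invented, and the point is only that deployments trade places while slot-sticky settings stay put.

```javascript
// Sketch of a staging/production swap: the deployed content trades
// places, while slot-sticky settings (SSL certificates, public domain
// names) stay attached to their slot.
function swapSlots(production, staging) {
  const tmp = production.deployment;
  production.deployment = staging.deployment;
  staging.deployment = tmp;
  // production.settings and staging.settings are deliberately untouched.
  return [production, staging];
}

const production = {
  deployment: "v1.0",
  settings: { domain: "clipme.example.com", sslCert: "prod-cert" },
};
const staging = {
  deployment: "v1.1",
  settings: { domain: "clipme-staging.example.com", sslCert: "staging-cert" },
};

swapSlots(production, staging);
console.log(production.deployment);      // "v1.1" -- the newly tested build
console.log(production.settings.domain); // unchanged production domain
```

Because only pointers to the deployments change, the swap is fast and can be reversed the same way if something goes wrong.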

And it only takes a few seconds, it’s already done. And now when we click the browse button, we’re now live in production.

So let’s actually just create a new meme here. I’m just going to drag in an image and give it some text. So what’s happening is that when I upload the image, the website is passing the image processing off to a background task. Now, this is something that’s traditionally been a little bit problematic to do in a reliable and scalable way.

But I’m now able to take advantage of a new feature in Azure websites called Web Jobs. And this allows me to run background tasks in the same context as my website.

And you see, I already created one here called the Gif Generator. I can write a Web Job in any language that’s supported on Azure Websites. But let’s go take a look at my implementation here.

I’ve added a regular C# console application to my solution here. And I’m using the Web Job SDK, and that makes it really easy for me to listen in on any events that are happening in any of my Azure resources. And the rest of the implementation is just using regular .NET types and libraries.

So this is really easy to do. And in order for me to publish my Web Job, I need to associate it with my website. Now, I’ve already done that. And it’s as simple as going up to my Web application and just make the association here through the Gif Generator console. And now the next time I publish my website, the Web Job is being published with it.

Another cool feature that we get is we get a nice dashboard that shows me the invocation log. So here, every time my Web Job has run, sometimes maybe there’s a failure. And I can very easily go in, get the insights I need, see the input and the output, as well as a full call stack so I can very easily diagnose and fix the issue.
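The invocation log described above can be sketched with a small job runner. This is an illustrative stand-in for the dashboard's behavior, not the Web Jobs SDK; the `gifGenerator` job and file names are invented.

```javascript
// Sketch of a background job runner that records an invocation log:
// each run captures the input, the output on success, or the failure's
// call stack, so problems are easy to diagnose afterward.
function runJob(job, input, log) {
  const entry = { input, status: "Success", output: null, error: null };
  try {
    entry.output = job(input);
  } catch (err) {
    entry.status = "Failed";
    entry.error = err.stack; // full call stack for diagnosis
  }
  log.push(entry);
  return entry;
}

const log = [];
const gifGenerator = (file) => {
  if (!file.endsWith(".gif")) throw new Error("unsupported format");
  return file.replace(".gif", "-captioned.gif");
};

runJob(gifGenerator, "cat.gif", log);
runJob(gifGenerator, "cat.png", log);
console.log(log[0].status, log[1].status); // Success Failed
```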

So now we’re in production. Obviously, this website is going to go viral. So I want to make sure that I get the best user experience to all you guys. So I’m going to use the Traffic Manager, a new thing in Azure Websites.

And I can set up Traffic Manager in three different ways. I can optimize for performance, for round robin, or for failover.

So failover is the scenario where I select the primary node. And in case of any failures, Traffic Manager is automatically going to route traffic to my secondary nodes.

But since my code never fails, I’m going to optimize for performance. I’ve already set up Traffic Manager. So all I have to do here is to add my recently deployed website here into the Traffic Manager profile. And that ensures that no matter where you are in the world, you’re always going to hit the datacenter closest to you.

And now if you load the website up again, you can see we’re being served from West U.S. because we’re sitting right here in San Francisco.

So I just showed you how to easily create a development environment up front, use some of the new features of Visual Studio to create beautiful, modern, Web applications, deploy them to staging for testing, then on to production, and now scaling worldwide. Thank you very much. (Applause.)

SCOTT GUTHRIE: So as Mads showed, there are a lot of great features that we’re kind of unveiling this week. A lot of great announcements that go with it.

These include the general availability release of auto-scale support for websites, as well as the general availability release of our new Traffic Manager support for websites as well. As you saw there, we also have Web Job support, and one of the things that we didn’t get to demo, which is also very cool, is backup support, so that both your content and your databases can be automatically backed up when you run them in our Websites environment as well.

Lots of great improvements are also coming from an offer perspective. One thing a lot of people have asked us for with Websites is the ability not only to use SSL, but to use SSL without having to pay for it. So one of the cool things that we’re adding with Websites and it goes live today is we’re including one IP address-based SSL certificate and five SNI-based SSL certificates at no additional cost to every Website instance. (Applause.)

Throughout the event here, you’re also going to hear a bunch of great sessions on some of the improvements we’re making to ASP.NET. In terms of from a Web framework perspective, we’ve got general availability release of ASP.NET MVC 5.1, Web API 2.1, Identity 2.0, as well as Web Pages 3.1. So a lot of great, new features to take advantage of.

As you saw Mads demo, a lot of great features inside Visual Studio including the ability every time you create an ASP.NET project now to automatically create an Azure Website as part of that flow. Remember, every Azure customer gets 10 free Azure Websites that you can use forever. So even if you’re not an MSDN customer, you can take advantage of that feature in order to set up a Web environment literally every time you create a new project. So pretty exciting stuff.

So that was one example of some of the PaaS capabilities that we have inside Azure. I’m going to move now into the mobile space and talk about some of the great improvements that we’re making there as well.

One of the great things about Azure is the fact that it makes it really easy for you to build back ends for your mobile applications and devices. And one of the cool things you can do now is you can develop those back ends with both .NET as well as Node.js, and you can use Visual Studio or any other text editor on any other operating system to actually deploy those applications into Azure.

And once they’re deployed, we make it really easy for you to go ahead and connect them to any type of device out there in the world.

Now, some of the great things you can do with this is take advantage of some of the features that we have, which provide very flexible data handling. So we have built-in support for Azure storage, as well as our SQL database, which is our PaaS database offering for relational databases, as well as take advantage of things like MongoDB and other popular NoSQL solutions.

We support the ability not only to reply to messages that come to us, but also to push messages to devices as well. One of the cool features that Mobile Services can take advantage of — and it’s also available as a stand-alone feature — is something we call notification hubs. And this basically allows you to send a single message to a notification hub and then broadcast it to, in some cases, millions of devices that might be registered to it.
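The fan-out idea behind a notification hub can be sketched in a few lines. The hub here is an in-memory stand-in, not the real service, and the message text is invented; the point is just that one send from the app reaches every registration.

```javascript
// Sketch of notification-hub fan-out: the app sends one message to the
// hub, and the hub broadcasts it to every registered device.
class NotificationHub {
  constructor() {
    this.registrations = [];
  }
  register(deviceCallback) {
    this.registrations.push(deviceCallback);
  }
  broadcast(message) {
    // One send from the app fans out to every registration.
    this.registrations.forEach((deliver) => deliver(message));
    return this.registrations.length;
  }
}

const hub = new NotificationHub();
const inboxes = [[], [], []]; // three registered "devices"
inboxes.forEach((inbox) => hub.register((msg) => inbox.push(msg)));

const delivered = hub.broadcast("New build is live!");
console.log(delivered);     // 3
console.log(inboxes[0][0]); // "New build is live!"
```

The real service also handles per-platform delivery (WNS, APNS, GCM and so on) behind that single send.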

We also support with Mobile Services a variety of flexible authentication options. So when we first launched mobile services, we added support for things like Facebook login, Google ID, Twitter ID, as well as Microsoft Accounts.

One of the things we’re excited to demo here today is Active Directory support as well. So this enables you to build new applications that you can target, for example, your employees or partners, to enable them to sign in using the same enterprise credentials that they use in an on-premises Active Directory environment.

What’s great is we’re using standard OAuth tokens as part of that. So once you authenticate, you can take that token, you can use it to also provide authorization access to your own custom back-end logic or data stores that you host inside Azure.

We’re also making it really easy so that you can also take that same token and you can use it to access Office 365 APIs and be able to integrate that user’s data as well as functionality inside your application as well.

The beauty about all of this is it works with any device. So whether it’s a Windows device or an iOS device or an Android device, you can go ahead and take advantage of this capability.

What I’d like to do is invite Yavor on stage to show us how you can do it. (Applause.)

YAVOR GEORGIEV: Thank you.

Since we launched Mobile Services, we’ve seen some strong adoption and some great apps built on top of our platform, both across the consumer and enterprise space.

If you’re not familiar with it, Mobile Services lets you easily add a cloud-hosted back end to your mobile app, regardless of what client platform you’re using.

I’m here today to talk about an exciting new set of features that makes Mobile Services even more compelling, especially in the enterprise space.

Let’s start in Visual Studio. We’ve added a new project template that lets me build my mobile service right from within VS. What’s even cooler is I can now use any .NET language. And our framework is built on ASP.NET Web API, which means I can bring my existing skills, my existing code, and I can leverage the power of NuGet.

Now, we already have a project ready here that I created using the template. And you’ll notice it has a simple structure, contains only the things I need to know about.

I have a to-do item, and that’s going to be the model for my service. And the next thing I need is a table controller that lets me expose that model to the world in a way that all our cross-platform clients understand.

And then it wouldn’t be Mobile Services without great support for scheduled jobs. So I’ll go ahead and press F5. We finally addressed one of our top customer requests and added support for local development. We get a documentation page right here with information about your API, and then we’ve even added a test client right inside the browser that lets you try it out.

I’ll go ahead and send a request. And I’ve hit a break point in my server code. Local and remote debugging now both work great with Mobile Services. (Applause.)

Now, as expected, I get my result back in the browser. But we’ve all built to-do lists with Mobile Services. What I really wanted to build for you today is a powerful line-of-business app with the cloud.

So I was preparing here on the podium, and I noticed the mouse I was given is pretty broken. So I thought, why don’t I build a facilities app that I can use to report the issue? And then the facilities department can use that same app to take care of it. This is easy to do with Mobile Services.

The first thing I’ll do is I’ll add a class for my model. And let’s call it “facility request.” And by default, this is going to use entity framework code first, backed by a SQL database. However, as Scott mentioned, we support a variety of back-end choices including MongoDB and table storage.

The next step is to add a controller. We have first-class support for the Mobile Services Table Controller right here in the scaffolding dialog.

I can pick the model class I created, pick the context, and just press add, it’s that easy.

Now, this wouldn’t be a great enterprise app without great enterprise security. So let’s assume for a moment that my company has already federated our on-premises Active Directory with Azure.

Adding authentication to my API is as easy as adding attribute to my controller.

Now that we’re done with our service, let’s go ahead and publish. We’ve integrated Mobile Services in the same publishing experience that Mads demoed earlier. And I can pick an existing service or I can even create a new one right from VS.

Let’s pick one I’ve created previously. And when I publish, this will deploy my code to Mobile Services, which provides a first-class hosting environment from my APIs.

Now, let’s switch for a moment to the client app we’ve built. You’ll notice our app logic is abstracted away in a portable class library. And what that lets me do is easily reuse that code across a variety of client platforms.

We’re already taking advantage of the Mobile Services portable SDK, which gives me some easy data access methods, such as the one you see here that loads up all the facility requests from the server.

Now, what my app is actually missing is support for authentication. So let’s go ahead and do that.

I can take advantage of the Active Directory authentication library, which gives me a native, beautiful login experience on all my favorite clients. I can then pass the authentication token to the Mobile Services back end so then my user is logged into both places.

Let’s launch this in the simulator. The first thing it will ask me to do is log in with my company account.

And once I’m signed in, it’s going to go and call out to our on-premises Active Directory and start pulling graph information about my user, as you see here.

Now, we don’t have a facility request created yet, so let’s go ahead and do that. So it’s a broken mouse on stage. And then we can even take a photo. There it is. Missing a button. Let’s take that picture.

And when I press “accept” the facility request will get safely stored in the Mobile Services back end.

Now, we’ve added authentication with Active Directory, but what my app’s users will really want is integration with all their other great enterprise services, including SharePoint and Office 365.

For example, the facilities department might want to create a document on their SharePoint site for every request they receive. It’s easy to build that for them with Mobile Services.

Let’s go back to our service project and to the controller we added. And we’ll find the patch method.

This method gets called every time a facility request gets updated. So I can very easily take advantage of the Active Directory authentication token to call out to the great, new set of Office 365 REST APIs, as you see here. And they let me generate the document on the fly and post it straight to SharePoint.

Now that we’ve made our changes, let’s go ahead and publish. And while that’s going, we can go back to our app and now play the role of the facilities department. So we get the request, we’re going to take care of the broken mouse here. And then we’ll take an after picture. There you go. Much better.

When I press “accept” my request will go through to Mobile Services and that will call out to SharePoint and generate the document.

And we can verify that by heading over to our SharePoint site, my company SharePoint site here, and if I look inside the request folder, you’ll see there’s a new document generated just a few seconds ago using my company identity.

The document itself contains all the great information that the mobile service sent over down to the images. It’s that easy to integrate Mobile Services and SharePoint. (Applause.)

Now, there’s one last thing I wanted to show you. We mentioned that our app logic is in a portable library. And what that means is I can use Xamarin to reuse my C# investments across Android and iOS as well. In fact, we have the iOS project ready right here in Visual Studio.

All I need to do is set the target as the iPhone Simulator and hit F5. I’ll go ahead and switch to a Mac here that I’ve paired with my Visual Studio instance. And that’s where the app is building and deploying right into the iOS simulator. And this is common in today’s bring-your-own-device world where your customers are going to expect the same great enterprise features in their favorite device. There you go. The same app running in the iPhone Simulator now. (Applause.)

Thank you. So what you just saw was building a .NET back end locally, publishing it to the cloud, adding authentication with Active Directory, integrating with SharePoint, and then building a cross-platform client with Xamarin.

The team and I are excited to see what you’ll go and build with these great new features. And if you’d like to learn more, please come and see me in my session this afternoon. Thank you very much. Scott? (Applause.)

SCOTT GUTHRIE: One of the things that kind of Yavor showed there is just sort of how easy it is now to build enterprise-grade mobile applications using Azure and Visual Studio.

And one of the key kind of linchpins in terms of from a technology standpoint that really makes this possible is our Azure Active Directory Service. This basically provides an Active Directory in the cloud that you can use to authenticate any device. What makes it powerful is the fact that you can synchronize it with your existing on-premises Active Directory. And we support both sync options, including back to Windows Server 2003 instances, so it doesn’t even require a relatively new Windows Server, it works with anything you’ve got.

We also support a federated option as well if you want to use ADFS. Once you set that environment up, then all your users are available to be authenticated in the cloud and what’s great is we ship SDKs that work with all different types of devices, and enable you to integrate authentication into those applications. And so you don’t even have to have your back end hosted on Azure, you can take advantage of this capability to enable single sign-on with any enterprise credential.

And what’s great is once you get that token, that same token can then be used to program against Office 365 APIs as well as the other services across Microsoft. So this provides a really great opportunity not only for building enterprise line-of-business apps, but also for ISVs that want to be able to build SaaS solutions as well as mobile device apps that integrate and target enterprise customers as well.

And what I’d like to do is invite Grant Peterson, who is the CTO of DocuSign, on stage to talk about how they’re integrating this functionality into their iOS app. (Applause.) Thanks, Grant.

GRANT PETERSON: Thank you. Good morning. DocuSign is the global leader in digital transaction management. Our global cloud platform makes it possible for people to take personal and professional business and transact it for signature anyplace on the planet and keep it 100 percent digital.

So I know a lot of you have probably experienced DocuSign either doing a real estate transaction or possibly signing an agreement with Microsoft. We’re excited to show that today.

We have about 95,000 clients and about 40 million users worldwide using DocuSign every day in order to keep their business, personal and professional, digital.

So today we’re going to show an application. I’d like to tell you that the DocuSign service is built entirely on Microsoft technology. The back end of our service is built on SQL Server, it runs on IIS, it’s written in C#/.NET, and we run our services, including Azure as part of our infrastructure.

So today we’re going to show this, and we’re going to show it on a device that I never expected to show at a Microsoft conference, on an iPhone. (Applause.)

So the DocuSign app has been downloaded about 3 million times on the iPhone. People use it every day to do transactions when they’re on the run. And it has an exciting new feature that I’m going to show, which is authenticating with Active Directory.

So we can use Active Directory now, allowing people to use their enterprise credentials, which they prefer, for managing their devices and their access. And for the first time, they can log in with those credentials on a mobile device. Sorry, it’s hard to type. I just have to say DocuSign and Azure trust each other, and I’m now authenticated into DocuSign using Active Directory.

Now, I would do a standard transaction that I might do on the run. Another exciting feature is I’m going to do a facilities request similar to the demo we saw a little bit earlier, but that request, I need to make an additional signature and do it on the run from my SharePoint account.

So I just pulled this document up on SharePoint with an integration there. And now I’m going to go ahead and sign that. So it’s loading in the DocuSign service. It’s always slower in a demo.

So what I’m going to do is I’m going to put my signature in this document. And I’m going to go ahead and draw my signature. Place it in a document. And I’m done.

So what’s fun about this is I was able to sign it on the run. Now, I don’t want it on paper ever. And another great feature, not only was I able to sign it on the run using my Active Directory credentials, I was also able to pull it from SharePoint in order to get the document to sign. And I just saved it back to SharePoint, which also can happen automatically, so that our organization, from a legal perspective or from my own perspective, can gain access to that document at any point in the future. So no more paper gets lost, nothing hits the floor, and there’s the signature in the document in SharePoint straight from my mobile device on my iPhone. (Applause.)

So we were very excited about this integration project. And one of the things we were most thrilled with was how easy it was to do.

Here’s a small code sample of what it took to push the document back to SharePoint. So we found this project very approachable, we’re excited about the tools, and we’re extremely excited about what’s possible on Azure today. So thank you. (Applause.)

SCOTT GUTHRIE: So I think one of the things that’s pretty cool about that scenario is both the opportunity it offers every developer that wants to reach an enterprise audience. The great thing is all of those 300 million users that are in Azure Active Directory today and the millions of enterprises that have already federated with it are now available for you to build both mobile and Web applications against and be able to offer to them an enterprise-grade solution to all of your ISV-based applications.

That really kind of changes one of the biggest concerns that people end up having with enterprise apps with SaaS into a real asset where you can make it super-easy for them to go ahead and integrate and be able to do it from any device.

And one of the things you might have noticed there in the code that Grant showed was that it was actually all done on the client using Objective-C, and that’s because we have a new Azure Active Directory iOS SDK as well as an Android SDK in addition to our Windows SDK. And so you can use and integrate with Azure Active Directory from any device, any language, any tool.

Here’s a quick summary of some of the great mobile announcements that we’re making today. Yavor showed we now have .NET backend support, single sign-on with Active Directory.

One of the features we didn’t get a chance to show, but you can learn more about in the breakout talk is offline data sync. So we also now have built into Mobile Services the ability to sync and handle disconnected states with data. And then, obviously, the Visual Studio and remote debugging capabilities as well.

We’ve got not only the Azure SDKs for Azure Active Directory, but we also now have Office 365 API integration. We’re also really excited to announce the general availability of our Azure AD Premium release. This provides enterprises with management capabilities that they can actually also use and integrate with your applications, and enables IT to also feel like they can trust the applications and the SaaS solutions that their users are using.

And then we have a bunch of great improvements with notification hubs including Kindle support as well as Visual Studio integration.

So a lot of great features. You can learn about all of them in the breakout talks this week.

So we’ve talked about Web, we’ve talked about mobile when we talk about PaaS. I want to switch gears now and talk a little bit about data, which is pretty fundamental and integral to building any type of application.

And with Azure, we support a variety of rich ways to handle data ranging from unstructured, semistructured, to relational. One of the most popular services you heard me talk about at the beginning of the talk is our SQL database story. We’ve got over a million SQL databases now hosted on Azure. And it’s a really easy way for you to spin up a database, and better yet, it’s a way that we then manage for you. So we do handle things like high availability and patching.

You don’t have to worry about that. Instead, you can focus on your application and really be productive.

We’ve got a whole bunch of great SQL improvements that we’re excited to announce this week. I’m going to walk through a couple of them real quickly.

One of them is we’re increasing the database size that we support with SQL databases. Previously, we only supported up to 150 gigs. We’re excited to announce that we’re increasing that to support 500 gigabytes going forward. And we’re also delivering a new 99.95 percent SLA as part of that. So this now enables you to run even bigger applications and be able to do it with high confidence in the cloud. (Applause.)

Another cool feature we’re adding is something we call Self-Service Restore. I don’t know if you ever worked on a database application where you’ve written code like this, hit go, and then suddenly had a very bad feeling because you realized you omitted the where clause and you just deleted your entire table. (Laughter.)

And sometimes you can go and hopefully you have backups. This is usually the point when you discover when you don’t have backups.

And one of the things that we built in as part of the Self-Service Restore feature is automatic backups for you. And we actually let you literally roll back the clock, and you can choose what time of the day you want to roll it back to. We save up to I think 31 days of backups. And you can basically rehydrate a new database based on whatever time of the day you wanted to actually restore from. And then, hopefully, your life ends up being a lot better than it started out.

This is just a built-in feature. You don’t have to turn it on. It’s just sort of built in, something you can take advantage of. (Applause.)
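The "roll back the clock" idea can be sketched as picking the latest retained backup at or before the requested restore time and rehydrating a new database from it. This is an illustrative model only, not the service's implementation; the timestamps and backup IDs are invented.

```javascript
// Sketch of point-in-time restore: from a retained window of automatic
// backups, pick the latest one taken at or before the requested time.
function pickRestorePoint(backups, restoreTime) {
  const eligible = backups
    .filter((b) => b.takenAt <= restoreTime)
    .sort((a, b) => b.takenAt - a.takenAt); // newest first
  if (eligible.length === 0) {
    throw new Error("no backup exists at or before the requested time");
  }
  return eligible[0];
}

const backups = [
  { takenAt: Date.parse("2014-04-03T00:00:00Z"), id: "backup-1" },
  { takenAt: Date.parse("2014-04-03T12:00:00Z"), id: "backup-2" },
];

// Restore to just before an accidental DELETE at 13:00.
const point = pickRestorePoint(backups, Date.parse("2014-04-03T12:59:00Z"));
console.log(point.id); // "backup-2"
```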

Another great feature that we’re building in is something we call active geo-replication. What this lets you do now is you can actually go ahead and run SQL databases in multiple Azure regions around the world. And you can set it up to automatically replicate your databases for you.

And this is basically an asynchronous replication. You can basically have your primary in read-write mode, and then you can actually have your secondary and you can have multiple secondaries in read-only mode. So you can still actually be accessing the data in read-only mode elsewhere.

In the event that you have a catastrophic issue in, say, one region, say a natural disaster hits, you can go ahead and you can initiate the failover automatically to one of your secondary regions. This basically allows you to continue moving on without having to worry about data loss and gives you kind of a really nice, high-availability solution that you can take advantage of.
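The failover flow just described can be sketched as promoting a surviving secondary to primary. This is a simplified model, not the service's mechanism; the region names and roles are invented for illustration.

```javascript
// Sketch of active geo-replication failover: one read-write primary
// asynchronously replicates to read-only secondaries; on a regional
// outage, a surviving secondary is promoted to primary.
function failover(replicas, failedRegion) {
  const survivors = replicas.filter((r) => r.region !== failedRegion);
  if (survivors.length === 0) throw new Error("no replica to promote");
  survivors.forEach((r, i) => {
    r.role = i === 0 ? "primary" : "secondary";
  });
  return survivors;
}

let replicas = [
  { region: "North Europe", role: "primary" },  // read-write
  { region: "West Europe", role: "secondary" }, // read-only
];

replicas = failover(replicas, "North Europe");
console.log(replicas[0].region, replicas[0].role); // West Europe primary
```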

One of the things that’s nice about Azure’s regions is we try to make sure we have multiple regions in each geography. So, for example, we have two regions that are at least 500 miles apart in Europe, and in North America, and similarly with Australia, Japan and China. And what that means is that you know if you do need to fail over, your data is never leaving the geo-political area that it’s based in. And if you’re hosted in Europe, you don’t have to worry about your data ever leaving Europe, similarly for the other geo-political entities that are out there.

So this gives you a way now with high confidence that you can store your data and know that you can fail over at any point in time.

In addition to some of these improvements with SQL databases, we also have a host of great improvements coming with HDInsight, which is our big data analytics engine. This runs a standard Hadoop instance as a managed service, so we do all the patching and management for you.

We’re excited to announce the GA of Hadoop 2.2 support. We also have now .NET 4.5 installed and APIs available so you can now write your MapReduce jobs using .NET 4.5.

We’re also adding audit and operation history support, a bunch of great improvements with Hive, and we’re now YARN-enabling the cluster so you can actually run more software on it as well.

And we’re also excited to announce a bunch of improvements in the storage space, including the general availability of our read-access geo-redundant storage option.

So we’ve now done deep dives into a whole bunch of the Azure features.

Let’s go up the stack now and talk a little bit about our programming languages and tools. One of the great things about Azure, obviously, is that you can use any tool and any language to program against all of these services. But in particular, as you’ve hopefully seen this morning, we’ve done a really great job of making sure .NET and Visual Studio provide a first-class experience and some real productivity gains that you can take advantage of.

Beyond just the Azure integration, though, there’s a whole bunch of other improvements that we’re making with .NET this week. You heard about some of them that we announced yesterday.

One of the things I wanted to talk about a little bit today and highlight is some of the great work we’re doing in the language space. In particular with a project that we called “Roslyn,” which is our new .NET compiler platform. And I’m really excited to invite Anders Hejlsberg on stage to show it off. (Applause.)

ANDERS HEJLSBERG: Thank you, Scott. Thanks, everyone. So “Roslyn” is the name we use for our project to create the next generation of our C# and Visual Basic compilers, and also the services that are associated with them in Visual Studio.

A key design point of “Roslyn” has been to change compilers from just being black boxes that take source code in and produce object code out, and turn them into APIs that allow everyone, all tools, to share in the intimate knowledge of code that only a compiler has.

With “Roslyn,” you get full-fidelity, complete API sets that allow you to generate, transform and analyze code. And this enables new experiences in our IDE such as an interactive prompt, better refactorings, new diagnostics and so forth.
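The compiler-as-an-API idea can be sketched against the public “Roslyn” API. This is a minimal sketch: the source text being parsed is just an example, and the package layout is as it shipped in the preview:

```csharp
// Sketch: using the Roslyn API to parse C# source and inspect it,
// rather than treating the compiler as a black box.
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class RoslynSketch
{
    static void Main()
    {
        var tree = CSharpSyntaxTree.ParseText(
            "class C { void M() { int x = 1 } }"); // note the missing ';'

        // The same diagnostics the IDE squigglies are built on:
        foreach (var diag in tree.GetDiagnostics())
            Console.WriteLine(diag);

        // Full-fidelity syntax tree: enumerate method declarations.
        foreach (var m in tree.GetRoot().DescendantNodes()
                              .OfType<MethodDeclarationSyntax>())
            Console.WriteLine(m.Identifier.Text);
    }
}
```

The same tree the compiler uses to emit code is what the IDE uses for refactorings and previews, which is the point Anders makes next.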

And because our C# and VB compilers are now written in C# and VB, we can also be a lot more agile when it comes to implementing new language features.

So the first thing I’m pleased to announce today is an end-user preview of the “Roslyn” technology for Visual Studio 2013. I have that installed on my machine right here. In fact, installing it is almost a no-op. It’s just a VSIX that you download and drop into Visual Studio. And when you enable it, you get a preview of what’s coming in the next version of Visual Studio: the new language services, the new C# 6.0, the next generation of VB.

So let me try and play with it a little bit. Now that we can more easily write new language features, we’ve listened to your requests. And one of the requests that we’ve seen a lot is the ability to have static usings. Meaning, why do I have to write Math.PI all the time? Couldn’t I just say using Math, and then simply say PI?

So let’s look at what that might look like in C# 6.0. We can say using System.Math here, and when I do that, you’ll see a few things happen. First, things gray out here, because it’s redundant to have both a using and the full qualification down here. But also, the IDE now suggests a refactoring where we can simplify the type name. And because of “Roslyn,” we can now show you a preview of what this refactoring is going to do. Here it’s just going to change it to PI. So if I say simplify type name, you see that the using also ungrays, because it’s now being used. (Applause.) Thank you. Thank you.
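The feature being demoed looks like this in code. Note that the preview Anders shows spells the directive "using System.Math;", while the feature ultimately shipped in C# 6 as "using static System.Math;"; the class and method below are just illustrative names:

```csharp
// Static usings: pull in the static members of System.Math so that
// PI can be used without the "Math." qualification.
using static System.Math;

public static class AreaCalculator
{
    // PI resolves against System.Math with no prefix.
    public static double CircleArea(double radius) => PI * radius * radius;
}
```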

We could also look at whatever other refactorings the IDE suggests here. We could introduce a local, or we could extract as a method. And, again, we use “Roslyn” to show you a preview of what’s going to happen to your code before it actually happens.

Here, I’m just going to say yes, and then it drops me immediately into a “Roslyn”-powered rename experience where I get to name the new method.

Now, let’s say that I name it something that already exists in my program. Now we show you that this might not be a good idea: if you do, here are the things that are going to go wrong.

But I might also, for example, name it a name that I could use, but I would be required to then qualify the identifier and the IDE then shows me what would happen there.

OK. So with “Roslyn” here today, we’ve thought deeply about how we can take all of this functionality and give it the biggest impact possible. How can we allow you to use all of these fantastic features?

And it’s a great pleasure for me to announce today that effective today, we are open-sourcing the entire “Roslyn” project. (Applause.) Thank you.

So let’s go look at that. I’m going to go to Roslyn.Codeplex.com. And here you see the home page for the “Roslyn” project. It’s still private. We haven’t published it yet. Let’s just look at what’s in here.

We can go look at the source code here. Go to compilers, for example, C#. Source. Compiler. There, and here we see the source code for the C# compiler, or part of it. Thank you. (Applause.)

Also, here’s the command I could use to create a clone of this repository. But, of course, first, we need it to be public. What do you think? Should I press it? (Cheers, applause.) All right. Here we go. “Roslyn” is now open source. (Cheers, applause.) Thank you. Thank you. Thank you.

So now that “Roslyn” is open source, it would be fun to take a look at what we can do with the “Roslyn” project. Of course one of the things you’d be able to do is now implement your own language features. So why don’t we try to implement a language feature here?

Now, we can’t be too ambitious in a keynote. But I was thinking that maybe we could introduce a new kind of string literal. In fact, how about angle-quoted, or French-quoted, string literals?

So let’s see if we can make the compiler support this kind of string literal, which of course it currently doesn’t. You see that we get red squigglies and so forth.

Now, let’s try and save this. And I’ve already cloned the “Roslyn” repository on this machine. So let me try to open up the “Roslyn” solution, which is right here. Here you see all of our compilers, language services and so forth.

And we’re going to look into the C# code analysis project, and we’re going to go into the parser. Then we’re going to grab the lexer, which is the thing that creates tokens out of our source code. And then we’re going to go look at scan string literal, and this is the place where the compiler scans. You see it currently recognizes single and double quotes. So we’re going to add an extra case for French quotes. And then we’re going to navigate to the actual implementation of scanning string literals, and we’re going to add one extra little case here for the opening quote. And then in the if statement that handles the closing quote, we’re going to modify that.

Those three little changes are basically all we need to do. So now I’m going to press Ctrl+F5 and build and launch a second hive of Visual Studio with my modified compiler in it that hopefully will support this fantastic new language feature. So let’s see what happens here.
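The shape of the change can be sketched with a toy scanner. This is purely illustrative (it is not the actual Roslyn lexer code, whose method names and structure differ), but it shows the same three-part idea: recognize the opening quote, pick the matching closing quote, and handle it when scanning:

```csharp
using System.Text;

public static class ToyLexer
{
    // Toy string-literal scanner: recognizes "..." literals and, with
    // the extra cases mirroring the demo, French-quoted «...» literals.
    // Returns the literal's contents, or null if no complete literal
    // starts at 'start'.
    public static string ScanStringLiteral(string text, int start)
    {
        char open = text[start];
        if (open != '"' && open != '«') return null;  // extra opening case for «
        char close = open == '«' ? '»' : '"';         // matching closing quote
        var sb = new StringBuilder();
        for (int i = start + 1; i < text.Length; i++)
        {
            if (text[i] == close) return sb.ToString(); // closing-quote case
            sb.Append(text[i]);
        }
        return null; // unterminated literal
    }
}
```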

Here is Visual Studio now running. You can see it’s a different hive. It’s a different color. I can open up my area calculator project, and lo and behold, we now support French-quoted string literals. (Applause.) Thank you.

In fact, we even offer up refactorings that use French-quoted string literals because all of these technologies are just sort of automatically integrated.

So all I’ve got to do now is package up a pull request and submit it, and then we’ll see if we can get our new language feature into the language. Of course, I will probably disappoint myself and not accept it. (Laughter.)

So now that “Roslyn” is open source, it also means that the “Roslyn” compilers can be used on other platforms. And when it comes to cross-platform, who better to invite on stage than our good friend Miguel de Icaza from Xamarin? (Applause.) Come on up, Miguel. (Applause.)

MIGUEL DE ICAZA: Thank you, Anders.

ANDERS HEJLSBERG: You bet.

MIGUEL DE ICAZA: Thank you very much. Hello, everybody. I’m very excited to be here. As you guys know, we’ve been working on bringing C# and .NET to other platforms for almost 12 years.

To celebrate the occasion, to celebrate the open-sourcing of “Roslyn,” we figured we would give everybody one of these nice C# shirts because we really like the C# language. (Applause.) Everybody knows it’s better than the Xbox, so feel free to exchange them. (Laughter.)

But the way that we did it is that we actually created an iOS app. We created an iOS app, and the trick is you need to run this little iOS app to place your order. So if you want to get it, just come to my session later today and I’ll show you how to do it.

But this is the app. It’s running on the iOS Simulator. And you can pick a men’s or a women’s shirt. This is an app that we just put together this morning. No, no.

So you go, you order your shirt, and then you go to the checkout. Just press checkout here. And there’s a little problem. The only requirement that you have actually is that you enter your e-mail address before you order your shirt. That’s all you need to do.

So what I’m going to do is I’m going to go and enter my login address here. And I’m going to use the exact same change that Anders just showed on stage. So instead of using the regular quotes, I’m going to use the French quotes here. So I hope that you guys can see that.

Now, if I try to build this project, you’ll see that I get an error. This is because, currently, the IDE is using the Mono C# compiler. But since “Roslyn” was just open-sourced, we’ve primed this machine to allow me to switch to a Mono installation that has the “Roslyn” compiler already installed, and also has the patch that Anders just showed.

So this time around, when I build the project, it actually succeeds. (Applause.) Yeah.

Now, you’ll notice that the IDE still has the squigglies. So unlike Visual Studio that already has “Roslyn” integrated, we haven’t really integrated into our IDE yet. But we’re going to do that. So that’s our next step.

So now when you run the application, you can actually — this time around, it has my e-mail address. I’ll go here. I’ll add my shirt. I’m just going to go with green, just like this one. And I’m going to check out.

And this time around, it asks me for my login and password. So close your eyes for a second. All right. Yeah. And I’m going to deliver this to my place in Boston. And that is it. And that’s it, guys. (Applause.)

Now, we just wanted to give this little gift to Anders. It’s a pre-packaged shirt with a nice C# button.

ANDERS HEJLSBERG: Thanks.

MIGUEL DE ICAZA: To celebrate the occasion. Thank you. (Applause.)

ANDERS HEJLSBERG: Thank you. (Applause.)

SCOTT GUTHRIE: Thanks, guys.

So one of the things that we’ve been focusing on with .NET over the last couple of years is how we embrace open source more. We started with ASP.NET and a lot of the projects that we’ve already put into open source. And you saw with today’s announcement an even bolder statement: we’re now contributing the C# compiler and the “Roslyn” infrastructure as well.

One of the things we’re really excited to announce today is that we’re taking the next step in terms of open source. We’re announcing a new .NET Foundation that we’re going to use as the umbrella for how all these projects get contributed. And it’s really going to be the foundation upon which we can contribute even more of our projects and code into open source.

And we’re launching it today with the .NETFoundation.org website, and you can see here we’ve taken all the Microsoft contributions we’ve already done with open source and are putting them under the foundation’s umbrella. All of the Microsoft contributions have standard open source licenses, typically Apache 2, and none of them have any platform restrictions, meaning you can actually take these libraries and you can run them on any platform and take advantage of them.

We still have, obviously, lots of Microsoft engineers working on each of these projects. But as Anders highlighted in his demo, this now gives us the flexibility where we can actually look at suggestions and submissions from other developers as well and be able to integrate them into the mainline products, and it’s something we’re really excited to launch.

We’re also really excited that we have a whole bunch of other folks within the community that are joining the foundation as well. Xamarin is very generously contributing a number of their libraries, and we expect to see many other companies contribute as well. And we’ve got a great set of advisors joining and contributing as part of the foundation from a variety of different companies. We think this is really going to take .NET to the next level and add even more energy and activity within the community. So it’s something we’re really excited about. (Applause.)

So we’ve talked today about a whole bunch of features. I think we said we had 44 announcements this morning, and we have maybe two more left. They’re pretty big ones. What we’re working on is not only providing lots and lots of these great features, but also an integrated experience for how you stitch them together and take maximal advantage of them.

As Azure evolves to have more and more capabilities, we really want a new experience for our customers that enables them to see all these different services, to monitor them, and to have a consistent dev-ops flow across them. We want you to be able to take all the different resources and services that you’ve seen here this morning and have a really rich way to integrate them together and manage them as applications, as opposed to a lot of disparate features.

And one of the things we’re really excited to show off for the very first time is our new Azure management portal, which is a very bold reimagining of our portal experience, and we think it’s really special.

And I’d like to invite Bill Staples on stage to show it off. (Applause.)

BILL STAPLES: Thank you. What an exciting time to be a developer, especially a cloud developer. I mean, think about it, every hacker’s dream. We have access to tens and hundreds of thousands of computers, petabytes of storage, millions of devices. And we can bring these things together into things we call apps. And sometimes we even make money off of it. (Laughter.)

At Microsoft, we’ve built a first-of-its-kind experience that puts those apps at the center and seamlessly integrates across the developer, the platform, and the infrastructure services that you know and love.

We’ve reimagined the cloud experience to be about your apps, your team, and you. Let’s go take a look.

You know, every great story has a beginning. And our journey today begins with a new Azure start board. Think of this as your own personal NOC. This dashboard is completely customizable. You can pin the parts, the tiles on here that show access to your services, your applications, the data that matters most to you.

Now, we’ve put a few parts on here by default. Probably the first one you notice, the biggest one, is the service health part. This provides you a global view of all the Microsoft Azure datacenters and their current health.

Now, as a cloud developer, knowing the health of the services you depend on is job No. 1. It’s especially helpful if in the middle of the night you get that page and you need someone to blame it on.

Let’s go ahead and check out what happens when I click on the service health part. We open what we call a blade. A blade is effectively a drill-down of the information, the next level of detail, if you will, from that part.

Here we can see the service health blade that gives me access to all of the Azure services, and a summary view of their health. I can then click into any of these services, for example compute, and we see all the regions where compute is deployed and the status in those regions.

And we can drill in, say what’s going on in West U.S. And we see, oh, there’s no problems, everything is running.

Now, if there was a problem here, I’m excited to say, we would actually show Mark Russinovich’s contact information so you could give him a phone call. (Laughter.) No, OK. You can tell I’m excited, that part was an exaggeration.

But this collection of blades that we’ve drilled into is what we call a journey. And it’s essentially a living breadcrumb trail. You can go back in the journey to see the previous context that got you to the part you’re focused on now. So I can quickly get from anywhere to anywhere.

For example, let’s say I want to look at the website service health. I can click on that, and we see now all the stamps where websites are deployed and their health.

We think this kind of modern information architecture and navigational structure is exactly what developers need building services in the cloud.

Now, here’s another really popular request that we’ve gotten from all of you. In fact, we’ve been out in the hallways talking to you about what you think of the current Azure experience, and probably the No. 1 request is more insight into what the services are costing you.

So you see the billing part here that shows me my primary subscription. It gives me, effectively, a summary of all of my usage on the subscription throughout the current billing period, plus access to a summary of the last three months. And I can even drill into a particular subscription and see a comprehensive view of the charges accruing on that subscription.

I can see the number of days left. I can see the burn rate, how much credit I’ve used. And even cost by resource. So I can see how much storage, how much bandwidth, how much compute I’m using.

We even have a line item detail of all the charges that are going to show up on your bill at the end of the month as they’ve accrued so far. Just think, you’re never going to be surprised by bills again. (Applause.)

So creating new instances in the new Microsoft Azure Portal is just as easy and simple as before. Right down here in the left-hand corner, I’ve got the new plus sign, I can click on that. And you can see I can easily access the set of services that we’ve implemented in the current preview. Access to our most popular PaaS website service that we demoed earlier, SQL databases, team projects. We’ve even partnered with our good friends at ClearDB to provide MySQL database access in the new Azure portal.

Yes, this entire experience is completely extensible, and we’re opening up to all of our partners in the Azure ecosystem that want to integrate to provide a seamless management experience for everything you want in the cloud.

Browsing existing instances is just as easy. I click on the browse hub, and you can see I can browse by resource, or even open up everything that I’ve got deployed and quickly access it.

Let’s go ahead. I’ve actually pre-created a team project to help get us started today that I want to use to show off how easy it is to set up a dev-ops lifecycle.

I’ve got my Build project. We’ll click into that. And what you’re seeing here is a blade that represents a team project. This team project is the same team project, powered by the same services, that backs Visual Studio Online today.

We’re happy to announce that we’re integrating those into Azure to provide you a seamless and end-to-end dev-ops experience.

So this is a blank one. Let’s go ahead and do some interesting things with it. For example, let’s set up continuous deployment. That allows me to set up a deployment target so that when builds happen on this particular project, they’ll automatically be deployed into a staging or production website that I can then go and test.

Creating a new website is very simple. I just give it a name. Let’s call this one the Build Demo 2014.

It then prompts me for a repository and a branch to deploy from once the build is complete. And with that simple process, I’ve now wired up a brand-new website and a team project that can automatically and continuously deploy the code changes that I make.

Let’s go ahead and check in some code. So I launch Visual Studio. I just click on the “open in Visual Studio” part. You’ll notice it opens up and prompts me to clone the repository, so I can do that in just a few seconds.

And then we’ll go ahead and create a new project. Now, this demo is more about the experience than the app, unlike some of the cool demos you saw earlier.

So I’m just going to use the default MVC template for this purpose. I’ll go ahead and just check that in, call this the initial commit. And I’ll tell Visual Studio to both commit this to my local repository as well as push all the changes to the repository that’s in the team project in the cloud.

With that, I’ll minimize Visual Studio, and you saw right here on the blade that the commit lights up. I can drill into it and see a history of all the commits. In this case, I’ve just got the one I made, and we can see the initial commit. I can drill even further now and see all the files that were committed as part of that change, and even drill in further if you want. I’ll show you in a little while how I can see a code view right here within the Microsoft Azure Portal. (Applause.) All right.

Now, if your projects are anything like mine, they’re probably backed by some work items, a backlog of sorts, sometimes called bugs. And my team loves to file bugs against me, and it looks like they’re already at it telling me what to do backstage. They’re telling me I forgot to instrument the code with application insights.

We’re excited to announce that we’re also integrating application insights into the Microsoft Azure Portal to give you access to not only the great system-level monitoring that you get today with Azure, but also application-level and user-level analytics for your applications.

To do that, you simply add a few lines of JavaScript code to your app. So they’re telling me, paste this in, and it’s backlog item No. 22. So let’s go back.

I could open up Visual Studio again to make that change and commit and sync it just like before. But instead, let’s check out something pretty cool. Let’s say I’m on an airplane or on a tablet that doesn’t have Visual Studio installed. And I get a bug I need to fix.

Right here in the Microsoft Azure Portal, I can drill into my source code. I can, for example, go into my shared views. Edit the layout files where this JavaScript needs to go. And right here, I get a full screen, syntax colored, IntelliSense editor in the cloud. (Applause.)

I can paste in that JavaScript. And tell it we had — I think it was bug 23. Go ahead and commit that change. And in seconds, I’ve already committed that change to my repository without any tools necessary.

Now, just to prove that’s true, let’s go back to our commits master. You see now I’ve got two commits, the first one and the second one showing up. I can even drill in, I won’t do that to save time. And also I can come back to our build definitions. We can click on that, and you’ll see I’ve got the first build already triggered and running. As soon as that completes, it will automatically deploy into the website, and I’ve got a second build now queued with the change I just made in the Microsoft Azure Portal that will run right after the first build is complete.

Now, we don’t have time to let that finish, but let me just step back for a second and remind us what we’ve seen.

We’ve seen for the first time a fusion of our world-class developer services together with the platform and infrastructure services in Microsoft Azure in a way which no one else is doing. This allows you to have a complete dev-ops lifecycle in one experience. Pretty phenomenal.

I want to transition now to the operations side of the dev-ops lifecycle. We’ve got, as I mentioned before, a new concept where we’re bringing together the various services that you use to compose an application into one concept.

If we go back to browse, you’ll notice we have something called resource groups in here. They’re pretty self-descriptive. They’re, effectively, the set of resources or services that make up an application.

For example, I’ve got a clip beam resource group here. This is the same website that you saw earlier in Mads’s demo. And we’ve been running it here for a while to give me a chance to demo it.

You’ll see when I open up a resource group, I get an aggregated view of not only the website, but the team project and the database associated with this app.

We show billing information, again, pervasive throughout the experience to give you an aggregated view of all the charges associated with the services linked to this application.

I can even click into monitoring and see, again, an aggregated view of all the operations that anyone who has access to my subscriptions has done against this particular resource group. Spanning all the services so I can see hosting plan, website updates, alert rules, et cetera, all aggregated in this one view.

Let me actually show you something else that’s pretty neat. Up here, the summary view, I get the topology of my app. I can click on, for example, the website and move from the resource group aggregated view into the website view.

The website blade has some pretty exciting new features to show that work with our website service. For example, I’ve got quick access to analytics. This is the App Insights analytics I mentioned earlier where I can see which browsers are most popular for my site. I can see the sessions, the devices that people are using. Even get a feel for which pages are slow or may need a tune-up.

With Web Test, I can actually measure the experience my customers are having from anywhere in the world. I can configure, for example, locations to monitor from and pages to hit, and then I can see the availability trend over the past week and the average response time my customers are enjoying.

Let’s go down a little bit here and show billing information. Again, pervasive throughout the thing. We can see this website is costing me about $30 a month. I can even drill into pricing tiers to see what other options there are maybe to save money or to add more capacity.

Right now, I’m running in the standard tier, which comes with all these great features. And if I decide based on the analytics that we saw earlier I need to scale up, I can simply click on a medium-sized instance, choose select, and this is one of the most awesome features of the website service. You’ll see we automatically scaled on demand, no configuration, no redeployment. You’re now running with twice the CPU and memory. (Applause.)

Now, if we go back up here, we can see not only can I navigate to the website, but I can also continue this journey on to see the database. And I can monitor its connections, I can scale it as well.

I’ll just take a pause for a second and remind us what we’re seeing. We’re seeing for the first time a distributed application: side by side, its website with analytics information, a team project with complete build and source code control, and a database, all in the same portal, all in one experience, without having to switch tabs, switch portals, or do OAuth handshakes. It’s all integrated in one.

Now, this kind of rich UI experience is pretty darn powerful. But let’s say I want to capture this resource group and describe it in a declarative way, so that I can check it into source control and anyone on my team can re-create that application anywhere in the world. Wouldn’t that be cool? Without having to click around, all those mouse clicks?

Well, I’m happy to say that today with the Azure Resource Manager Preview, you can do just that. And to show that off, I’m going to actually exit out here and we’ll go to a PowerShell command line.

Now, we’ve had PowerShell support as well as cross-platform command line support for a while now in Azure. But this is something new. This is something special. We’ve added a new set of cmdlets that work against our new Resource Manager service to provide you a declarative way to quickly deploy applications and services.

In this case, we’ve published some of the templates that we have as samples for you to quickly browse via PowerShell.

You can see we’ve got ASP.NET websites, PHP websites. We’ve even got these resource groups with a Web plus a database.

Let’s actually download one of those templates to start with and check it out. I’ve already downloaded one and edited it with some of the parameters that I want to provide as defaults to make things a little bit faster. But you can see, this is effectively a JSON file: a declarative way to describe the application that we can then go ahead and run.

You’ll see we’ve got a section here for describing the database we want to create. Another section for describing the website we want to create. And even cooler, we can pass context between services, for example, pass the connection information from the database to the website when it’s being created, and configure that up so that I have a web.config with the database connection string in it by default, without having to look that information up.
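The structure being described can be sketched as a stripped-down template fragment. This is not the actual sample template from the demo: the resource types and the parameters() and concat() template functions follow the Azure Resource Manager template format, but the names, properties, and connection string here are illustrative only and the fragment is not deployable as-is:

```json
{
  "parameters": {
    "siteName":   { "type": "string" },
    "serverName": { "type": "string" },
    "location":   { "type": "string", "defaultValue": "West US" }
  },
  "resources": [
    {
      "type": "Microsoft.Sql/servers",
      "name": "[parameters('serverName')]",
      "location": "[parameters('location')]"
    },
    {
      "type": "Microsoft.Web/sites",
      "name": "[parameters('siteName')]",
      "location": "[parameters('location')]",
      "properties": {
        "connectionStrings": [
          {
            "name": "DefaultConnection",
            "connectionString": "[concat('Server=', parameters('serverName'), ';Database=mydb')]"
          }
        ]
      }
    }
  ]
}
```

The key idea is the last section: the website resource references the database server parameter, which is how context like the connection string gets passed between services at provisioning time.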

Now, you could write code to do all of this today, but how much easier is it to just have it all right here in JSON and run a single command shell to execute it? Let’s go ahead and see that in action.

I’ve got a new Azure resource group cmdlet. I’ll pass in the template file, and it’s going to prompt me for a couple of parameters I didn’t supply. So in this case, let’s call this the Build RG Demo. And we’ll give it a location; let’s do West U.S., right here next to home.

You’ll see what happens is we upload the template into blob storage. We parse it, and then begin execution across the set of services. We actually provision these services in parallel to speed things up, and then we begin passing the context back and forth.

Not only do we do that, but we also set up the ancillary services that support the website and the database. Things like auto-scaling settings, alerts and notifications.

Combining this with the Puppet and Chef features you saw in Mark’s demo earlier gives you enormous power. Just think: you can now deploy services that span platform services and infrastructure services anywhere in the world, completely customize them, and deploy them in minutes.

I just stood up a website and a database in less than a minute without writing a single line of code. Pretty amazing, huh? (Applause.)

Just to show it’s real as well, I wanted to come back to our portal. You’ll see I’ve got the resource group displayed, and already lit up we see the Build RG Demo resource group that I created earlier. We can click into it, we see the website, we see the database, we see the event information. Again, I didn’t execute this, I didn’t create this in the portal, but we still get all the rich diagnostics information. And I even got a notification that something was happening on the command line.

This could have been done by a co-admin on my account. I don’t need to care. I’ve got all the same access to the information right here in the Azure Portal.

Now, to conclude the operations part of this demo, I wanted to show you an experience for how the new Azure Portal works on a different device. You’ve seen it on the desktop, but it works equally well on a tablet device, and is really touch friendly. Check it out on your Surface or your iPad, it works great on both devices.

But we’re also thinking, if you’ve got a big-screen TV or a projector lying around your team room, you might want to think about putting up the Microsoft Azure Portal as your own personal NOC.

In this case, I’ve asked the Office developer team if we could have access to their live blog site. They made me promise not to hit the stop button or the delete button, which I promised.

This is actually the Office developer blog site. And you can see it’s got almost 10 million hits already today running on Azure Websites. So very high traffic.

They’ve customized it to show off the browser usage on their website. Imagine we’re in a team scrum with the Office developer guys and we check out, you know, how is the website doing? We’ve got some interesting trends here.

In fact, it looks like there was a spike of sessions about a week ago. And page views, that’s kind of a small chart. It would be nice to know which page it was that spiked a week ago. Let’s go ahead and customize that.

This screen is kind of special because it has a touch screen. So let me go ahead and make that chart automatically expand. Now we see a bigger view. Wow, that was a really big spike last week. What page was that? We can click into it and get the full navigation experience, same as on the desktop. And, oh, look at that, there’s a really popular blog post from about a week ago. What was that? Something about announcing the Office you love on the iPad. Makes sense, huh? So we can see the Azure Portal in action here as the Office developer team might use it.

The last thing I want to show is the Azure Gallery. We populated the gallery with all of the first-party Microsoft Azure services, as well as the great partners that we’ve worked with so far in creating this gallery. And what you’re seeing right here is just the beginning. We’ve got the core set of dev-ops experiences built out, as well as websites, SQL, and MySQL support. But over the coming months, we’ll be integrating all of the developer and IT services in Microsoft as well as the partner services into this experience.

Let me just conclude by recapping what we’ve seen. We’ve seen a first-of-its-kind experience from Microsoft that fuses our world-class developer services with Azure to provide an amazing dev-ops experience, where you can work through the entire lifecycle of development, deployment, operations, gathering analytics, and iterating, right here in one experience.

We’ve seen an application-centric experience that brings together all the dev platform and infrastructure services you know and love into one common shell. And we’ve seen a new application model that you can describe declaratively. And through the command line or programmatically, build out services in the cloud with tremendous ease.

I’m so excited to show this off to you today. I hope you love it. Go check it out. And we hope to hear your feedback soon. Thank you very much. (Applause.)

SCOTT GUTHRIE: We’re really excited about this new experience, and I think it really is a capstone in terms of what Azure offers. It lets you bring together IaaS and PaaS services and fuse them into single applications. It lights up the entire dev-ops experience from development to deployment to analytics and learning. And we hope this will be transformational in enabling developers as well as operations teams to work much more smoothly in the cloud and really leverage its full power going forward.

And we’re really excited that starting today we’re opening up the new portal to all Azure customers. And so everything you saw here in the demo just a few minutes ago you’ll be able to do on your own subscription starting this afternoon. (Applause.)

We’re also announcing the preview of our new Azure Resource Manager, which allows you to orchestrate all those different services, program them together, and deploy and manage them as a single resource group. It will ultimately support all Azure services, so you’ll be able to compose both IaaS and PaaS pieces together. I’m really excited to announce the first preview today.

And then we’re also excited to announce the general availability of Visual Studio Online, which is the set of back-end services we used to power all of those dev-ops scenarios. That’s now generally available. You’ll be able to use it both in the new preview portal and in our existing management portal, and take advantage of these developer SaaS services in a unique way.

So hopefully you saw lots of stuff coming with Azure. We’ve got a busy year still ahead of us, with lots more features coming over the next couple of events, but I hope you got a good taste of some of the things that are happening and some of the momentum we have.

You can sign up if you’re not already an Azure customer today by going to Azure.Microsoft.com, and we’d love to have you give it a try, send us your feedback, and we hope you’re successful with the cloud.

Thank you very much. (Applause.)

And here’s Steve Guggenheimer to deliver our next keynote.

END
