ANNOUNCER: Ladies and gentlemen, please welcome to the stage Microsoft Corporate Vice President Brad Anderson. (Applause.)
BRAD ANDERSON: Hey, good morning, everybody! Good morning! Thank you for being here. We’re incredibly, incredibly excited to be here, grateful that you’d take this time to spend with us.
And because we're here in Spain, mucho gusto. We're very, very grateful that you're with us this week, and we hope we all have a great time here.
All right, let’s get started.
Everybody, it is such a great time to be in this industry, such a great time to be here. And one of the most common questions that I think I’m going to get today is going to be, is that tear-away tux app available, and I’ll tell you it’s only available in the Windows Phone Marketplace.
(Phone alarm beeping.)
Oh yeah, so I got her back just in time.
But everybody, this is the time when there's just so much opportunity, so much innovation occurring in the industry, so many things happening that you can truly take advantage of to differentiate your organization, delight your customers, and advance your careers.
And one of the things I love about what we do as a group, what we do as an industry is we are all about making others great. That’s what we do. We develop and we build out the infrastructure and solutions that allow everyone around us to do amazing, amazing things.
And that really is what this keynote is about and what the whole conference is about. It’s about the innovations that are happening, the work that we’re doing here that’s going to allow you to embrace these trends in the industry, again to advance your business, to delight your customers, and then also advance your careers. So that’s what we hope we accomplish, and that’s what we hope you take away from the next couple of days here.
So let me give you an overview of what we’re going to cover. At Microsoft we describe ourselves as a devices and services company. And so obviously a lot of the work that we do goes into building world-class devices with our partners. And, in fact, on Thursday we’re going to have an entire keynote with Jon DeVaan from the Windows organization, and he’s going to walk you through the innovations and the work that we’re doing in Windows.
This morning, we’re going to focus on the services that we’re building that really light up the devices.
So we’re going to start talking about the work that we’re doing to enable your users to be productive across all their devices, and really take advantage of some of the trends like bring your own.
We’re then going to talk about the innovations that we’re doing that allow you to build rich and engaging applications for these devices and users to consume, and what are we doing in that area.
Now, of course, these applications generate a lot of data. So how are we going to enable you to pull insights from that data, and what are we doing to make sure that your users can take advantage of just the explosion of data to really understand what’s happening in your business?
And then finally, we’re going to talk about the cloud platform and the work that we’re doing in the cloud platform to host all of this and make all of this possible.
And that’s what we’re going to cover in the next hour and a little bit of change.
So let’s start now with what we’re doing on the devices to enable your users to be productive. It’s what we call people-centric IT.
So think about it for just a minute here: what do your users want? End users want to be able to work on any device, they want to be able to just walk up, identify who they are, and have a customized experience delivered to them that gives them secure access to what they need to do their jobs. Secure access to their applications and to their data, that's what they're asking for.
And yet the world is more complex than it was when we were together a year ago in Amsterdam. Since that time, more than 1.2 billion smart devices have been sold around the world that users want to use. Fifty percent of IT organizations around the world have been mandated, mandated, to support consumer devices for their executives. And fifty percent of the users in our organizations who are in their 20s believe that bring your own device is a right, not a privilege.
And with all this change happening, what we’re seeing as we go out and query the world is more than 80 percent of the organizations are telling us that they’re being asked to enable users on these devices but their budgets are not being increased at all.
So we’ve got to figure out how to be a lot more efficient in how to leverage and take advantage of what you already have deployed, and that really is where we’ve been focused at.
So what you’re going to see today is you’re going to see some of the work that we’ve done that really focuses on these key points, and these are some of the design principles that we had in the releases that we’ve been working on.
We have really focused on your end users and how do we enable you to empower your end users, empower them to be productive across all of their devices.
We’ve worked hard to unify your environment, so the ability to do PC management, device management, anti-malware protection, all of that through a common infrastructure bring your costs down, give a consistent experience to yourselves, the IT professionals and to your end users, but unify that environment, get the most benefit out you can, and decrease your costs. And finally, do this in a way that helps you to protect all of the company data and all the company assets. Those were the goals.
Now let’s talk a little bit about how we did this, and it all really begins with the user. In the world that we live in, the world that we need to be able to deliver is in a capability for all of you to set policy and express policy that governs access to applications and data, first of all based on the user, then on the device, and then on the network location, but all begins with the user.
So think for just a minute: what is the authoritative source for identity inside the enterprise? Well, it's Active Directory. For years, Active Directory has been the authoritative source: when users come in, they identify themselves, you authenticate them, and that's where you express your policy about access and governance.
Well, we’ve cloud-optimized Active Directory with what we call Azure Active Directory. Now this gives you the capability to have your identities inside your organization but stretch that out also to the cloud.
Let me tell you a little bit about Azure Active Directory. Since we stood this service up, we've processed more than 265 billion authentications. In the time that it takes to brew an average cup of coffee, about two minutes, Azure Active Directory will have responded to more than a million authentication requests. And right now we're just about to cross over half a million unique domains inside Azure Active Directory.
So as you think about your identity strategy and how you're going to govern access for your users and their devices, think about centering everything on identity in Active Directory and stretching that out to Azure Active Directory.
We’ve done the same thing with Configuration Manager. System Center Configuration Manager is the authoritative source, the authoritative solution for managing your PCs, far and away the most commonly used tool.
We’ve cloud optimized that as well, and we’ve cloud optimized that with what we call Windows Intune. And Windows Intune is our desktop and device management solution and protection delivered from the cloud.
And what we’ve done in the last year is we brought these things together so you can now think about Config Manager and Intune as this continuum. And what this now allows you to do is have one solution that allows you to manage all your PCs, your devices, whether those devices are corporate-owned devices or whether those are devices that your users are bringing in, but you now have the ability to express policy that governs access to users or to applications and data based on first the user’s identity in Active Directory, the device that they’re using, and then finally actually the network location so you can actually express policy that governs how things are accessed whether you’re on a corporate access or you’re not.
Let me tell you a little bit more about Windows Intune for a minute.
Windows Intune today is servicing more than 35,000 unique organizations around the world. It’s a service that we’ve built that is just growing like a weed right now. And the thing that is unique about it is we now deliver to you cloud-based mobile device management capabilities, integrated with the tools that you’re already using on-premises, which is Configuration Manager. And again what this provides is a simple, intuitive, consistent experience for your users across their PCs and their devices to access their applications and to access their data.
OK, so today what we're announcing is the availability of some of these bits for download: Windows Server 2012 R2, System Center Configuration Manager 2012 R2, and Windows Intune. And literally just a couple of hours ago, the bits for the on-premises products, Windows Server 2012 R2 and Configuration Manager 2012 R2, were posted for download.
So one of the things I hope you’re excited to do and you’ll do as soon as you get back to the hotel is download the bits and start looking at all the innovation that’s in there.
Now let me tell you a little bit about some of the changes we made in the last year.
In the last year, we made some changes, and some of you are probably going, wow, Microsoft just last fall released new versions of Windows Server and of System Center, and here we are less than a year later and there are preview bits out already, and these will actually be generally available before the end of the year.
So in a year’s timeframe we’ve turned around entire releases of all of our major products, and how did we do that?
Well, a couple of things. First and foremost, one of our core principles is what we call cloud design, or cloud first. Literally, we are able to prove things out in the cloud, battle-test them, harden them, and then deliver them on-premises for you to use in your datacenter, and that has enabled us to significantly accelerate the pace at which we're bringing innovation to you.
The other unique thing we did here is that for the first time we did common planning across the Windows client, Windows Server and System Center. In fact, for System Center and Windows Server it was one vision document and one common set of milestones; everything is aligned.
What that means for you is that you're getting more end-to-end scenarios, complete scenarios delivered out of the box, and higher quality, especially when you couple that with the cloud design principle. And we're able to bring more innovation to you at a more rapid pace, so you're going to be able to advance and differentiate your businesses more.
Now, again, what this has all been about, the whole design, has been to make it easy for your users to access their applications and their data across all their devices, and that's what we want to show you right now.
So let’s give a hand to Adam. Adam is going to come up and show you some of the new innovations in these R2 releases. Let’s give him a hand. (Applause.)
ADAM HALL: Thanks, Brad.
All right, good morning, everybody. Welcome to TechEd.
So users bringing their own devices into the workplace is no longer just a trend, this is the reality that all of us in this room are facing today. And this is a really big challenge: how do we enable users to work on these devices that they bring in while remaining in control of the information that ends up on them?
So we’ve been working really hard on this challenge, and today I want to take you through the Microsoft solution based on Windows Server, System Center Configuration Manager, and Windows Intune that are going to help you meet this challenge. Let’s start.
So we’re going to start on a Surface device here, and this is just connected to the Internet. And I want to be able to get some work done on it.
So we’re going to start off by going to a SharePoint site, and we’re going to log in and enter our credentials here and see how we go. So let me just put my credentials in here.
All right, so I’ve gone to a site and I’ve entered my corporate credentials. But I get an access denied message. But not just any access denied message, this one is telling me that if I register my device using Workplace Join that I will get access to those resources.
Now, Workplace Join is a new capability in Windows Server 2012 R2 that enables me to make this device known to IT. It puts a certificate on the device and it registers a new device object in Active Directory.
So let’s go ahead and do a Workplace Join. To do this we go into the new modern settings control panel and under network there is a new workplace capability. Put my credentials in there and click join.
It’s going to ask me for my password, how do I want to register this device, enter my corporate credentials again, and you’ll see that I’ve now been prompted to do a phone factor call. We’ve integrated a multifactor authentication into the device registration service.
So I’m going to click continue and my phone over here will give me a call and ask me to do that multifactor authentication.
PHONE: Thank you for using Microsoft’s sign-in verification system. Press the # key to finish signing in. [# key pressed.] You have been signed in.
ADAM HALL: And so there as you can see, with no further interaction on my part, I have now Workplace Joined this PC.
So let’s go and try that SharePoint site again. Close that down, let’s go and try that again. Now it will ask me for my credentials again. It still doesn’t know who I am. So I’ll put that in, hit enter, and this time I get through to the SharePoint site. So by making this device known to IT they’re able to take the device itself into account in order to get access to that resource.
So that’s great, step one, I’ve got access to resources, which is fantastic, but now I want to get my corporate applications onto here.
Now, part of Windows 8.1, if we go back into the workplace settings, is that I can now turn on management. Turning on management is going to connect this device to the Windows Intune cloud-based management service. It's going to ask me who I want to register with Intune. I'm going to use my same corporate credentials and click sign in.
And this takes a little bit of time. It’s going to talk to the Windows Intune service, connect up. It’s going to ask me to accept the policies that IT wants to place on my device. I’m going to accept those, turn on, and the enrollment process is going to complete.
So in the background this will talk to Intune, it will pull down the configuration data and a few other bits and pieces.
So I’m going to move across to one that we’ve already done the enrollment on and has had a chance to accept all those policies down. So I’m going to move across to the second device now, and the first thing I’ll show you is that as part of doing that enrollment I’ve now got all of my Wi-Fi profiles, my VPN connections, and other data has come down so that my device is configured and I’m able to use it for work.
What we’ve also done is installed the company portal. And the company portal is about getting consistent access to not only my applications but also being able to manage my devices as well.
And so you can see applications on my screen. These are a combination of line-of-business apps that can be side-loaded, as well as deep links into the public application stores. You can see the devices that I have as part of this enrollment, so I can manage my own devices, and then some contact information in the event that I need some help from the help desk.
So we’re going to go ahead and install an application. It’s going to give me some information about this. And this is a line of business application that’s going to come down from Intune and be side-loaded onto the device.
I click install, you’ll see up in the top right-hand corner there’s a little flag indicating that there’s a process going on, and after that has now downloaded it’s going to install, and it gives me a notification to say this application is now installed on the device.
Now, if I go back to the Start screen, scroll down, and there is my expenses application. I can pin that to the Start and easily get my work done as simple as that.
But it’s not just about applications, it’s also about data as well.
So let’s go back into the company portal and take a look. As I scroll up from the bottom you can see I have a new button for work folders. And work folders is a new synchronization service that is part of Windows Server 2012 R2. This enables me to synchronize files from a corporate file server out to all of my devices.
So let’s go ahead and configure that. It’s going to take me across to manage work folders, and I’m going to be able to see that work folder and get that data down. It’s going to ask me for my email address, which I’ll type in, click next, and it’s now going off and discovered where my work folder sync share is. It’s going to tell me where it’s going to install those files. I could choose another drive. I click next and again it’s going to ask me to accept the policies that may be applied to my device as a result of putting corporate data onto this. This includes things like PIN codes but also the encryption of the data as it comes down.
I’ll choose to set up work folders. That’s now going to finish and the files are going to synchronize now down in the background. You can see that they’re coming down.
So if I bring up a couple of explorer windows here, do a refresh, you’ll see I now have a work folders icon there and there is the data.
Now, the top window that you're looking at is the data that's local on my device. Down at the bottom is the data that's on the back-end server, because we're going to do a synchronization and show you how this works.
So I have a document here that is local on my desktop, and I’m going to open that.
And one of the things that we've done with work folders is integrate it with Dynamic Access Control and rights management. So we have a policy in our organization that says if a document has the phrase v-next in it, it will automatically be rights protected.
So I’m going to come in, type v-next, I’m going to save this into my work folders, hit save. All right, let’s save that. And you’ll see that it’s in the top but you can see it also turned up in the bottom window as well.
So I’m just going to select this, and it might be a little small on the screen but the size of that file is 1,503 kilobytes. In the background Dynamic Access Control is scanning that document, classifying it, and performing rights management against it.
Now, just while we wait for that to finish, most of you are probably wondering how this works with SkyDrive Pro. SkyDrive Pro is about Office and getting access to SharePoint data and taking that offline. It enables you to collaborate with people and share information. Work folders is just my data. It’s coming off a file server and synchronizing down to all my devices.
So whilst we’ve been doing that, that file has now been rights protected. I can double-click on it. It’s going to configure it for rights management, and we’ll see that the Dynamic Access Control has kicked in. Even though the file originated on my device, it’s synchronized up, was rights protected, and then synchronized back down again. And there it is, the document is now rights protected.
So that’s great. We’ve got access to resources. We’ve synchronized our data down. I’ve got my applications. Let’s fast forward a little bit and think about what happens if I lose this device or what happens if it’s stolen or even if I’m just finished with it and I want to pass it off down to one of my children.
What we can do is come back to the Start screen, go back into those workplace settings. We can leave the device enrollment, which is going to remove that certificate and break the connection back. And I can also turn off management. And by turning that off it’s going to remove all of the corporate data but leave my personal data behind.
So if I go back to the desktop now what we’ll see — and on the right-hand side you’ll see it pop in here in a sec — if I try and open that file that I just had access to, I’m going to get an access denied message. And if I go back to the Start screen, you’ll see that that application that I had installed has now been uninstalled. So it’s removed all that corporate data, it’s rendered the data inaccessible through work folders, and I’ve been able to get access to resources, get my applications and all my files as well.
So that’s the end of my demo. If you want to know more about this, come along to the PCIC foundation session straight after the keynote.
Thank you very much. (Applause.)
BRAD ANDERSON: Hey, thanks a lot, Adam.
Just to summarize again what you saw there: a consistent experience now for your users across all their devices. We demonstrated that on Windows devices, but you can get that same company portal application through the Apple store and through the Android store. It's a consistent portal experience that allows all of you to brand IT to your users and give them seamless, intelligent access to their applications across all of their devices, again all governed by the user's identity in Active Directory, and with the ability to express policy on the device itself as well.
So, for example, you could have a policy that says when a user comes in on a Windows device, go ahead and deploy that application down and run it in a distributed model. And you could have a policy that says if they come in on an Apple device, I want that application automatically launched from a thin client server, so it comes up running in the datacenter in a thin client mode. To the user it's the same experience, and you as the IT professionals express the policy that governs access, again based on the user, the device, and the network location.
So we're really now empowering your users to be productive across all their devices and unifying your environment, so you no longer have to have one solution for PC management, another solution for anti-malware, and another solution for mobile device management. All of that comes together with the combination of System Center and Windows Intune, which lets you take full advantage of one common, cloud-enabled infrastructure, and that really is the work that we've been doing.
All right, so we began with enabling your users across all their devices. Let's now transition and talk a little bit about developers and the work that we're doing to help you build the richest and best applications to run on these devices.
Now as developers we all want a few things. We want fast time to market. We’re always under deadlines. We want to make sure that we get the work that we’re doing out into the hands of our users as quickly as possible.
We want to use the latest and greatest technology that makes these applications the best, but we also want to make sure that we can meet the scale and the needs that the applications are going to have.
And then finally, we want to do everything we can from an organizational perspective for readiness. And that specifically is talking about the relationship and the coordination across development and operations.
So as we’ve been building out the work that we’ve been doing in Visual Studio and in Windows Azure, we have focused on enabling these things and we really have focused on these four points:
How do we enable a rapid lifecycle, so that the time from when you start building to when you're deployed, up and running, and users are using it is shortened;
How do we do this in a way that enables you to build the application but have it consumed on all the devices your users are going to use;
Provide access and get value out of all the data that our users want to use;
And then finally, do it all in a way that we ensure the security and the protection of the company assets, because that’s first and foremost one of the things that we have to make sure that we do.
This is where we've been focused as we've been building out Windows Azure and Visual Studio.
Now let's talk for just a minute about Windows Azure, and I want to talk about how strategic your choice of public cloud partner is going to be.
Now, as you think about the public cloud you are looking for a partner that’s going to give you global availability but has a guaranteed service level agreement and backs that up financially, and is willing to make the kinds of investments that are required to literally build out a global public cloud, and that’s what we have been doing with Azure.
So right here is where we have our datacenters. We're building out datacenters around the world, and, in fact, we were the first multinational organization to announce, through a partnership, that we would have public cloud capabilities in mainland China.
Why is that important? Well, as we go forward every one of us is going to want to make sure that we are conducting business in China because it’s such an opportunity, and Windows Azure was the first to offer that as a part of our global footprint.
There’s going to be times where you’re going to want to actually talk to someone local. You’re going to want to have support from local people. So we actually have global support around the world in your native languages so that when you have that need you have accessibility. Local account teams and finally the ability to actually do your billing in the currency that it should be done in.
So Windows Azure, you know, we are investing literally billions of dollars every year in building out our public cloud. We literally are deploying hundreds and hundreds of thousands of servers every year into our public cloud, and you can see some of the numbers there. Over a billion customers and over 20 million unique businesses are using our public cloud.
So let’s take a look at how some of the organizations are actually using this, and one of the most common design points that we see are organizations that are running in a hybrid mode using Azure for some capabilities but then marrying that with applications and data that they have in their own datacenters.
And that certainly is the case with a U.K.-based organization called Aviva. Let’s take a look at how they’re using Azure.
(Video segment.)
BRAD ANDERSON: What a wonderful example of a hybrid cloud scenario. With Aviva, all of the quotes, all of their IP, sit inside their databases, behind their firewalls, but they're using Azure to bring in the requests that come in from these applications.
And I just love this application. You know, I would love to be able to have a personalized quote where my insurance could be as much as 20 percent less based upon my driving skill.
You know, so the way it works is you download this application and it actually tracks your driving for 200 miles. You upload that up to Azure. Then you literally get a score and you get a personalized quote now that is based on the reality of how you’re driving.
I love this as a technologist but I also love this as a father of three daughters who are driving. And I would just love to get that score because I think the way it would appear is my wife and I would have great scores, but my three daughters who are driving, I’d actually have that proof that I’m a better driver than them.
OK, so let’s now dive a little bit deeper. Let’s take a look at the innovation that we’re doing in Azure and some of the things that we’re doing in Visual Studio, and let’s start with Mark Russinovich walking us through some of the things in Azure. Let’s give him a hand. (Applause.)
MARK RUSSINOVICH: Buenos días.
I’m going to talk for the next few minutes about some of the changes we’ve made both in terms of features as well as the offerings and MSDN offerings in Windows Azure to make Azure a no-brainer environment or your devtest scenarios.
And to demonstrate this I’ve created a little sample application. You want to imagine that this application is one that we’re going to run on-premise, but we’re going to take advantage of the fact that we can spin up VMs very quickly in the cloud and do our devtest there.
This application consists of three virtual machines. It’s kind of the standard website with a database back-end. You can see here’s the IIS website right here. Here’s the SQL Server back-end. And then finally I’ve got them both connected to an Active Directory domain controller sitting up in the cloud so that I can use Active Directory authentication to get into that front-end.
One of the considerations when we’re doing devtest with assets that we’re going to be deploying on-premise is that we want to make sure those VMs are secure when we put them up into the cloud, that our database server, for example, or our domain controller is not sitting there on the wide open Internet.
And one of the features we’ve added to Windows Azure to make it easy for developers to get that secure connection up into the cloud is through the use of virtual networks.
When I click down here on the virtual networks tab you can see that I’ve created a devtest virtual network. And the dashboard shows me that I’ve added those three virtual machines to the virtual network.
Part of virtual networks is a private IP address overlay that you apply to those virtual machines up in the cloud, and so they each get private IP addresses. You can see that the IIS front-end here is 10.0.1.5.
Now, I’m not connected to that virtual network from this dev box here. So if I try to ping that, you’ll see that those pings time out and that machine is just not accessible unless I’m part of that virtual network.
Joining that virtual network from this machine is as easy as downloading one of these VPN client packages here and installing it. And once I do that, the virtual network will show up in my connections. And I can easily connect to that virtual network here. It’s going to authenticate. It just takes a second. If I go back to the ping window, you’ll see now that I’m joined to that virtual network and now those machines that are up in the cloud are accessible from my dev box.
So now I’m ready to start my devtest session, which I’ll start by publishing this little sample app up into the cloud. And that’s as easy as just following the standard publish-to-website procedure that you would do for any Web deployment. The only difference here is you can see in the connection settings for the Web deploy is that I pointed at that 10.0.1.5 IP address of that website front-end.
When I press publish, it’s going to package it up and in about a second that’s deployed up into the cloud and that opens up right there.
Oh, and then look at that, I’ve added a few events to my upcoming events, and I added those because somebody backstage said, “Mark, you really need to go to those sessions,” and I thought, wow, you must have seen those at TechEd U.S. and thought they were really good, and they said, “No, I’m the speaker manager, I just need to make sure you’re there.”
The next thing I'm going to show you is how to do some actual debugging from my dev box up into the cloud, and how easy that is.
For example, if I want to set a break point right here on this line, which is the line that starts to create that code that’s displayed at the bottom of that Web page, I set the break point there and then I just go to attach to process, which is the same steps that I’d follow whether I’m debugging my own dev box or a remote system. In this case it’s that 10.0.1.5. I find the IIS hosting process, w3wp, say attach, and now I’ve got a break point set on that website up in the cloud. When I press F5 I’m going to hit that break point, and now I can do my debugging and stepping from my dev box up into the cloud. I’m going to press F5.
So now I’m done with my debugging. Let me stop that. I’m ready to shut down those VMs. I’ll come back to the VM list and select the IIS VM and press the shutdown button down here.
Now let’s switch back to the slides and I’ll talk a little bit about some of the changes we’ve made to billing and as well as the MSDN offers we’ve got that are aimed specifically at devtest.
And when I shut down that VM just now, I stopped getting billed for the compute hours that that VM had been using. And that’s a bit of a change from what we had up until about three weeks ago where even if I stopped a VM you’d still get charged for it. So now those VMs are shut down, I shut down my devtest environment, no more charges are accrued.
Not only that, but up until about three weeks ago we had per-hour billing, which meant that the few minutes of devtest I just did would have cost me three VM hours' worth of charges, one hour for each of the three VMs. But now, with per-minute billing, I literally get charged just for those few minutes that I had those VMs active.
Further, because I’m an MSDN subscriber I’m getting charged a special flat rate for compute versus the standard rates that you’ll see Azure users typically get charged, six cents an hour for those VMs versus up to $2.11 an hour for BizTalk Enterprise if I happen to be doing devtest scenarios with BizTalk Enterprise.
So a flat six cents an hour means that those few minutes that I spend just now doing that debugging probably cost me a little over a penny. And not just that, but it was probably free because with MSDN — and how many of you are MSDN subscribers? Let’s see you raise your hand. So quite a few of you. You automatically get monthly credits now, ranging between $50 and $150, depending on your subscription level, towards Windows Azure resource usage for VMs, databases, websites, whatever. So I could actually run those three VMs as my devtest environment for 12 hours a day for every day of the week for a full month, and that $50 level right there at the basic would probably cover all my expenses. So all my devtest would effectively be free just as part of being an MSDN subscriber.
And I’ve got one more thing to mention about MSDN, and that is if you’re an MSDN subscriber as of June 1st, and you go and activate Azure subscription and deploy any resource at all, a VM for a few minutes is all you have to do, you’re automatically entered to win a cool Aston Martin car like the one that you saw Brad driving, as well. So I’d encourage you to go try that out and look at using Windows Azure as your devtest environment.
And with that, I’m going to turn it over to Brian Keller. He’s going to talk a little bit more about some of the enhancements that we’ve got in Visual Studio for you as well. So, Brian? (Applause.)
BRIAN KELLER: You know, I love that demo. It’s so easy to get started with IaaS. I’ve already started to move a lot of my devtest VMs. And within MSDN it’s like having your own personal datacenter standing by for you.
You know, a year ago when we were in Amsterdam for TechEd Europe we were getting ready to release Visual Studio 2012. And thanks to many of you, you’ve helped us make that the fastest-adopted release of Visual Studio ever, over 4 million people worldwide using Visual Studio 2012.
We’ve also been shipping continuous updates for Visual Studio and Team Foundation Server. So as recent as April we shipped update two, and within the first 30 days more than 50 percent of you were already using update two. So that’s a great way for us to get new features and bug fixes out to you on an ongoing basis.
But we’re not done. Later this year, we’re going to be releasing Visual Studio 2013 and Team Foundation Server 2013. In fact, tomorrow, when the BUILD conference kicks off, you’ll be able to download a preview of these releases and you’ll be able to start using that for real applications. It will have a go live license attached to it.
So today, I’d like to show you a few demonstrations related to what my team works on, which is application lifecycle management tooling. Our team’s mission is to give you great tooling to help you plan, dev, test, release, and of course operate those applications that you’re building.
So I’ll switch over to my demo machine here, and for those of you that are already using TFS 2012, you’ve probably been familiar with the agile planning capabilities we introduced. So I can start to track my team’s product backlog, I can break down this work into individual iterations, I can make sure that none of my team members are overworked, and finally I can start to track this work using our modern day implementation of post-it notes on a whiteboard. So now my entire team can see what everybody else is working on.
And we’ve heard from you that this works great for small teams of maybe six to 10 people, but what a lot of organizations are struggling with is how do you take agile and scale that to the enterprise.
One of the things we’re doing in 2013 is introducing agile portfolio management. So here I’m looking at what my team calls features, which are larger units of work, which are contributed to by multiple teams across my organization. I can start to pivot down and see how all the individual teams are contributing to work across that organization.
Another thing we’re introducing with 2013 is called Team Rooms. With Team Rooms this gives me a nice place to have a real-time collaborative experience with other people on my team. So, of course, we can start to chat with other people on my team, but we’re also giving you all of the interesting events out of Team Foundation Server that tell me things like a build just broke or a new feature is ready to be tested.
I can also go back in time and catch up on maybe what happened yesterday. So if I was out of the office, this is a good way for me to come up to speed on what my team was working on.
Now, of course, as software developers our core mission at the end of the day is to ship high-quality working software. And so if I switch over to Visual Studio 2013 I’ll show you a new feature that we call Code Lens.
A lot of you probably play "Halo," or if you look at what a fighter jet pilot has, they have this really nice heads-up display that gives you situational awareness about everything that's happening around you. Code Lens is just like that for software developers.
So here you can see that right from this block of code I get this nice ambient indicator. It tells me what other blocks of code are referencing this code. I can also see that I have some unit tests that exercise this code. One of those tests is failing, so I could make the fix and easily rerun those tests straight from within here.
From Team Foundation Server we also see all of the other people who were working on this block of code recently, and I can open up one of those changes and see exactly what the change was and why it was made. I see that it was to fix this particular bug.
So we think that Code Lens is going to change the way that you develop software.
So let’s say that I’ve gotten to the point that I’m ready to make sure that the new feature I was just working on is going to scale to the thousands of customers who are going to access my website. Visual Studio has for several years shipped great Web and load testing tools right within Visual Studio. But one of challenges is that every time you run a load test you have to set up a set of machines that are going to simulate those virtual users. So you have to set up those machines and maintain them, and you probably only use them a few times each month. We thought that was a great example of a scenario that we could move to take advantage of the elastic scale offered by the Windows Azure platform.
Now what I can do is decide to run this load test using Team Foundation Service hosted in the cloud. When I run this test it’s going to provision all of the resources I need automatically for me in the cloud, and then simulate those thousands of users.
This just takes a few seconds but I ran this other test earlier and this shows you how your application is performing under load. So that gives you the confidence that you need.
So let’s say that this application is now ready to be pushed out to production. One of the challenges with managing lots of releases in an ongoing fashion is that you’re shipping more frequently, which means that you’re pushing multiple releases out to dev, test and production. And release management is a real challenge here.
I’m pleased to announce that earlier this month we announced that we’re acquiring a company called InRelease. InRelease is a great partner of ours which for years has provided a nice release management tool built right on top of Team Foundation Server, so it was a natural integration for us.
You can see that I can manage the way that my code moves from dev into QA into production and so on, and I also get a nice approvals dashboard here as well. So that release that I just finished testing a minute ago, I can say that I want to approve this, load test passed, and now this is going to kick off a workflow letting the operations team know that this release is ready to push to production.
So those are just a few of my favorite capabilities. I hope some of you will join me for my foundational session. And starting tomorrow I hope you’ll download Visual Studio 2013 Preview, start building some great applications, give us some feedback, and help us make the final release later this year the best-ever release of Visual Studio. Thank you. (Applause.)
BRAD ANDERSON: All right, thanks, Brian.
Hey, wonderful innovations coming out of Azure, wonderful innovations coming out of Visual Studio. You know, we made reference to the IaaS capabilities of Windows Azure. Just let me give you an idea of how fast this is growing. We have more than a thousand customers every day now signing up and actually deploying things on Azure, and more than a quarter of a million unique organizations taking advantage of it. And one interesting measure: across Azure and our Azure-related services, we're now generating more than a billion dollars of revenue for the company. So it's growing incredibly fast, and we're going to talk a little bit more about that as we go through the rest of the morning.
OK, so we talked about the work that we’re doing to enable your users across all their devices, then the work and the innovation that we’re doing to help you build the most rich and the best applications possible to be consumed on those devices. Now let’s talk about data.
All these applications create data. All the machines that we're using are creating data. There's an absolute explosion of data happening around the world, and one could argue this is the era of data. And the organizations that enable their users to really pull insights out of all the data being created around the world are really going to be the organizations that differentiate themselves.
So with that explosion of data coming from various places around the world, an interesting data point: IDC tells us that data is currently growing at a 60 percent annual rate. To put that in perspective, a 60 percent annual growth rate means that in five years we're going to have roughly 10 times the amount of data that exists today, since 1.6 raised to the fifth power is a little over 10.
And again, we know the organizations that put that data in front of individuals and allow them to really pull out those insights are going to be the most differentiated.
So in terms of the work that we've been doing, we have been really focused on how we make access to all that data simple and easy, using the tools that you're already familiar with like Excel and SharePoint, and how we do that in a way that presents the ability to do powerful analytics across the whole organization, from the CXOs down to the product manager who's running a marketing campaign and wants to understand in real time what's happening.
And that actually is one of the biggest trends here: users' expectations are increasing dramatically. So this complete data platform, thinking about data from the OLTP database all the way up through data warehousing and up to the visualization of that data in a way that makes it easy and simple to get access to those insights, is where we have been focused.
Now, again let’s take a look at how an organization is using some of this innovation, and we’re going to talk for a minute about an organization called bwin, bwin an online gaming organization. Let’s take a look at how they’re using some of the SQL capabilities to advance their business.
(Video segment.)
BRAD ANDERSON: Again, what an amazing story. bwin has hundreds of thousands of individuals placing millions of bets every day, and what they saw, using the exact same hardware, no change to the hardware at all, is that the number of transactions they could process went from 15,000 per second to over 250,000 per second. We're seeing increases of 16, 17, 20x in the number of transactions on the same hardware. To put that in perspective, it means they're going to be able to service 20 times the number of customers they could before, using this capability called Hekaton, on the same hardware, and in so doing save well over $100,000.
That's the kind of innovation you can do in software. That's what we mean when we talk about cloud first and these cloud design principles: taking things that we are literally learning in the cloud and bringing them to run inside of your datacenters.
So let’s now dive a little bit deeper on this, and we’re going to invite Eron Kelly to come out, and he’s going to walk you through some of the innovations we’ve been doing in SQL. Give him a hand. (Applause.)
ERON KELLY: Thanks a lot, Brad. It’s great to be here at TechEd Europe representing the data platform.
Now, data-driven decision-making has always been a key priority for our customers, but recently there's been a whole lot of interest, and frankly hype, around things like big data. Why is that? In the U.S. alone, over the last couple of months there have been over 28 big data conferences.
Well, there’s a couple of trends that are driving it. One, the rise of the cloud. In the past, these big data number-crunching exercises were only really available to elite universities or large governments, but now with the public cloud you have access to effectively unlimited amounts of storage and compute to do your own big data projects.
The cost of DRAM has dropped dramatically over the last 10 years, which now makes it affordable to put your most-used data in memory and take advantage of the higher read/write throughput that you get from RAM versus disk.
And then, of course, an explosion of data has occurred. Brad talked a lot about some of these statistics. My favorite is in the last two years we’ve generated more data than in the history of mankind. Think about that, think about that amount of data.
And it’s huge new volumes of data. It’s driven by smartphones. It’s driven by the social sphere, things like Twitter and Facebook. It’s diversification by sensors on devices, from the assembly line to even your automobile.
And this is new data, it’s data that we’ve never had access to before. And within that data there’s new insights that we can unlock because those new insights will drive changes for our business, because ultimately data changes everything. It changes the products that you offer your customers based on what they’ve already purchased. That next logical product is a key focus in the industry today.
It changes the inventory levels you maintain. Too much inventory, it’s going to create a drag on profits. Not enough inventory, you’ll have stock-outs or customer satisfaction issues.
We can even use data to change the structure and format and content of this keynote. A little bit more on that later.
So this influx of data does create a burden on IT: how do I manage all this new data?
But there’s also an opportunity, and that opportunity is for you to be able to find that hidden treasure chest of insight within the sea of big data, unlock those insights and deliver them to your organization.
And to help you do that I’m very excited today to announce the availability of the technical preview of SQL Server 2014. (Applause.) Yes, there’s a little applause, there we go, good, good, good. (Applause.) It’s now available for download, and you can start to take advantage of this great technology.
Now, there’s too much to talk about to cover the full surface area of SQL Server 2014 but let me highlight a couple of points.
First off, it’s really the best cloud database we’ve ever built. We actually bring the cloud right into SQL Server Management Studio and allow you to configure backup with simple T-SQL commands right into Azure.
Even better, we allow you to right-click on your database and create always-on secondaries in Azure for DR and geo-redundancy; very powerful, very easy.
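To make that concrete, here is a rough T-SQL sketch of those two cloud capabilities, backup to Azure blob storage and adding an AlwaysOn secondary in Azure. The database, storage account, availability group, and server names are placeholders for this example, not anything from the demo.

-- Credential that points at an Azure storage account (placeholder names).
CREATE CREDENTIAL AzureBackupCredential
    WITH IDENTITY = 'mystorageaccount',              -- storage account name
         SECRET   = '<storage-account-access-key>';

-- Back the database up directly to Azure blob storage.
BACKUP DATABASE ExpensesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/ExpensesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;

-- Add an asynchronous AlwaysOn secondary hosted in an Azure VM for DR and geo-redundancy.
ALTER AVAILABILITY GROUP AG_Expenses
ADD REPLICA ON N'AZURE-SQLVM-01'
WITH (ENDPOINT_URL      = N'TCP://azure-sqlvm-01.cloudapp.net:5022',
      AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
      FAILOVER_MODE     = MANUAL);

The keynote demo drives this through the right-click experience in Management Studio; the sketch above is roughly the T-SQL that sits underneath it, using the same surface you already use on-premises.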
In the area of analytics, we've made improvements to our in-memory columnstore index: it's now an in-memory updatable columnstore index. That's right, so now in your data warehousing solutions you can read and write directly to that columnstore, which gives you near-real-time analytics.
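As a small illustration of what updatable means in practice, here is a sketch against a hypothetical fact table; the table and column names are invented for the example. In SQL Server 2014 a clustered columnstore index accepts inserts directly, so the warehouse can stay current without dropping and rebuilding the index.

-- Illustrative warehouse fact table.
CREATE TABLE dbo.FactBets (
    BetId      BIGINT        NOT NULL,
    CustomerId INT           NOT NULL,
    PlacedAt   DATETIME2     NOT NULL,
    Stake      DECIMAL(10,2) NOT NULL
);

-- The clustered columnstore index becomes the table's storage and is updatable in SQL Server 2014.
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactBets ON dbo.FactBets;

-- New rows land in the index's delta store and are compressed into columnstore segments later.
INSERT INTO dbo.FactBets (BetId, CustomerId, PlacedAt, Stake)
VALUES (1, 42, SYSUTCDATETIME(), 25.00);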
Now, as I talked about in-memory I have to talk about the new in-memory OLTP engine. Prior to this, we referred to it with the codename Hekaton. As you saw in the video, it’s wicked fast.
And there’s a bunch of great new technologies that make it that quick, and I want to highlight three of them.
First off, SQL Server will identify which tables in your application are running hot and recommend which ones to push into memory. When they do go into memory, you take advantage of the much higher read/write throughput of memory versus disk.
In addition to that, we actually store the data in a different data structure. So in the past we stored data on a page, and when a process needed to do a read/write on that page it would latch the page, it would do a read/write, and then it would release the page. Well, subsequent processes that also wanted to access data on that page had to wait. What ends up happening is your CPU slows down because it’s waiting for those pages to become unlatched.
With our in-memory OLTP engine we now store the data in individual rows, which allows that CPU to run fast across all the rows simultaneously, so you get much, much better performance.
Now, the third thing is that SQL Server 2014 will identify which stored procedures can also benefit from performance improvements and allow you to compile them into machine code. When you compile those stored procedures and run them as machine code, it's blazing fast and really takes off.
So those three things together create huge performance benefits that you saw bwin experience in the video.
But the best part of our in-memory OLTP engine is that it’s in the box, it’s part of SQL Server 2014. It’s not a new product to buy, it’s not an add-on, it’s not a new product to learn, it’s just part of the package.
And because it allows you to choose which tables you move into memory, you can use your existing hardware. You don't have to go out and buy a big new box with tons and tons of RAM and move your whole database into memory in order to get performance gains. With SQL Server 2014 you can do it all right in the box and right on your existing hardware. So it's very, very powerful new technology.
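Here is a minimal T-SQL sketch of those in-memory OLTP pieces, assuming the database already has a memory-optimized filegroup; the table and procedure are hypothetical stand-ins, not bwin's actual schema.

-- A memory-optimized table: rows live in memory and are accessed without page latches.
CREATE TABLE dbo.Bets (
    BetId      BIGINT        NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerId INT           NOT NULL,
    Stake      DECIMAL(10,2) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure: compiled to machine code when it is created.
CREATE PROCEDURE dbo.usp_PlaceBet
    @BetId BIGINT, @CustomerId INT, @Stake DECIMAL(10,2)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.Bets (BetId, CustomerId, Stake)
    VALUES (@BetId, @CustomerId, @Stake);
END;
GO

Calling dbo.usp_PlaceBet from the application then runs compiled code against memory-resident rows with no page latching, which is where throughput gains like the ones in the bwin example come from.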
SQL Server 2014 is the foundation of our data platform, but it's not the only thing we're doing. We're actually using the data science process as a guide for where we want to make future engineering investments, because as Brad mentioned earlier, we really believe the differentiation here is taking the business users who are closest to the business problems and bringing the data to them, so they can drive insights much faster and much richer.
And so it starts with making it easy for those users to find data, combine it with other data they may have to start to drive insights, and we do that all within Excel with a great new add-on with the codename “Data Explorer.”
Now, once that data is in Excel, you can do great analysis with Power View, a powerful tool that's very familiar to most of your users. That analysis allows you to form new theories and come up with new insights. And once you have those, you can hand it over to IT to operationalize, so you can take action across your whole organization.
So the idea of going from data to insight is all about making it easy for that user to connect to the data, find the data, do the analysis with a familiar, powerful tool, and then deploy across a complete data platform.
But rather than just talking about it, let’s go ahead and show it to you. So I’m going to invite Dandy Weyn out here from my team to do a little demonstration of these great BI technologies. Welcome, Dandy. (Applause.)
DANDY WEYN: Hey, Eron.
ERON KELLY: Hey, Dandy.
DANDY WEYN: How’s it going?
ERON KELLY: Good.
DANDY WEYN: Let me show you what I’ve got with me.
ERON KELLY: Excellent. So what’s that? You didn’t get the dress code. It’s blue shirt day. What’s up with the soccer jersey?
DANDY WEYN: What do you mean? This is football. I mean, I thought bwin, great SQL Server customer, bring on Madrid. Any Real Madrid fans in the audience? (Cheers, applause.) There’s a few.
ERON KELLY: All right, show us these great BI tools.
DANDY WEYN: So yeah, let’s have a look at some of those data visualization capabilities. And as we’re on the subject of football, what I did here, this is Power View, which is a powerful visualization, mapping, reporting capability that we have sitting directly in Excel. And so what I have here is all the football clubs in Europe that are represented here within the audience.
I’m pretty sure that if we go through this entire list, we’ll have folks from Copenhagen, we have some. What about my Dutch friends on IAX and such? Anyone? (Applause.) How about Bayern in Munich? (Cheers, applause.) See, there we go.
ERON KELLY: So basically, next time I'm traveling in Europe on a customer visit, I could use this tool to figure out where the great football matches are.
DANDY WEYN: With this crowd I bet you could even get your tickets.
ERON KELLY: Wow, that would be great.
But here we are, this is great, this is great for football, but we’re here at TechEd. What kind of insights can you give me about TechEd?
DANDY WEYN: So what we have, we have data insights on everyone in this room. So let’s have a look at that.
So what I have here is a spreadsheet with all the registration information of everyone in the audience, everyone at the event basically. So I can see many countries and so on. Now, of course, this doesn’t really visualize quite nicely.
ERON KELLY: It’s hard to get insight from that.
DANDY WEYN: So what we’re going to do is we’re actually going to go to that powerful GeoFlow capability that we have there, and we’re going to create a map that’s going to visualize the audience.
ERON KELLY: So what Dandy is doing is launching a new feature called GeoFlow, which basically interrogates the Excel spreadsheet, finds the different data elements in there, and makes it really easy for him to then map it in 3-D visualization.
DANDY WEYN: And there we go, I've got my audience. We're going to map this audience. We can really zoom around, zoom into things, go all over Europe.
ERON KELLY: These are basically all the folks, where people are from in TechEd.
DANDY WEYN: That’s exactly right.
ERON KELLY: Awesome. Really cool. There’s even some folks from the states.
DANDY WEYN: And I could map the labels to it so we can actually see which countries people are from.
Now, of course, I want to see much more in this data, right? I want to find out which city each person is here from. And so we can easily map those cities and actually see the city representation.
Now, which city do you think has the highest representation of attendees?
ERON KELLY: Well, looking at the map here I would say Madrid has got a pretty high number, but it looks like we’ve got a lot of people from the Scandinavian countries. Is that Oslo up there? Who’s from Oslo? Are there any folks from Oslo out here? (Cheers, applause.) OK, we’ve got a little bit. There’s Helsinki.
DANDY WEYN: So we’ve got Helsinki, we’ve got Stockholm, Swedish guys are here again. Yeah, big audience, always very well attended.
ERON KELLY: A lot of folks from Sweden and North America, too.
DANDY WEYN: And so we’ve got Stockholm, and right there we’ve got London.
ERON KELLY: London. So it looks like the most attendees here at TechEd Europe are from London. Is that right, London, does that make sense? Are there folks from London here? (Laughter.) There’s a few, there’s a few. Is that right?
DANDY WEYN: Is that what you think? Now let’s have a look at this one, OK? Have a look at this one right there.
ERON KELLY: Oh, of course. (Laughter.)
DANDY WEYN: Look at that bar. It’s Redmond.
ERON KELLY: Redmond. Of course, of course.
DANDY WEYN: And you know we like to come to TechEd to talk about our technologies and our product capabilities, and really bring you the best and provide those data insights that we’re looking for here, right?
ERON KELLY: No, that makes sense, Redmond. Redmond does make sense as the most attended city.
DANDY WEYN: Now, of course, what I really care about is drilling down into that audience. As you mentioned, we can really drill down to individuals and find out what they’re here for and what they’re looking at. So rather than mapping the city, I can map the job titles. And look at this, I can really drill down into each of those cities and start navigating.
So if I look at Stockholm, for example, I see I have consultants. I see I have IT managers. I can go over to Oslo and we can map there, and we have some senior consultants. So really nice, everyone in the IT industry is actually represented.
ERON KELLY: So you can really see who is in the audience and what they’re focused on.
DANDY WEYN: What I really care about is drilling down using those visualizations and finding some unique people who pop out in this audience. And we’ve got this little place here in Sweden that’s called Ludvika. And look at that, we have a database guru in the audience.
ERON KELLY: A database guru.
DANDY WEYN: Isn’t that exciting, a database guru right there.
ERON KELLY: Our database guru.
DANDY WEYN: So what’s really nice is that I think that database guru is such a great guy that I actually brought in a Seattle Sounders Football Club shirt, fully customized with “Database Guru.” Those are the capabilities we have right there, right? (Applause.)
ERON KELLY: So this is a real guy with a real title on his badge and it says “database guru”. Wherever you are in the audience, come to the booth and that jersey is yours. Congratulations.
So with GeoFlow it looks like you’re able to see big trends in the data, but then you can really drill down into some specific details.
DANDY WEYN: And one of the things that I’m also interested in, as a speaker at this event, is who is actually coming to my session. So I can go back to that Power View capability and really drill down into the audience. What I have here is a report that shows me the representation of people who used the Session Builder tool and signed up for a session. I can drill down into each of the individual sessions. Look at the representation across my database and BI track: I can see very strong representation from Germany, the United Kingdom, and again Sweden. And I can also see the representation for each of the industries.
Of course, when I take a different session, let’s say the SQL Server 2014 in-memory session, you see that the audience changes based on the industries and on which sessions people prefer to come to. So if I look at my session specifically, I’ll see the United Kingdom, Sweden, and actually Belgium. So that really means I’m not the only Belgian in that session room.
ERON KELLY: Yes, that’s great.
DANDY WEYN: So thanks for joining me, guys.
ERON KELLY: Great. So, Dandy, so far we’ve just seen insights on what I would call normal relational or structured data. What about unstructured data, Hadoop?
DANDY WEYN: So you mentioned Data Explorer. One of the capabilities I have with Data Explorer is that I can retrieve data from just about any kind of data source you can think of. You can go to the traditional databases, but you can also go to the Windows Azure Marketplace and combine your data with datasets you want to pull in, vendor-related information and so on. What I can also do is go to Windows Azure HDInsight, which is the Microsoft distribution of Hadoop running on Windows Azure.
ERON KELLY: So what Dandy is doing right now is he’s basically using Data Explorer to connect Excel to Azure, and specifically a Hadoop cluster on Azure, where we’ve been collecting a bunch of tweets with the hashtag of #TechEd over the last couple of weeks.
DANDY WEYN: And so I’ve got those tweets right here. We can see all of them, just right there.
ERON KELLY: What are you going to do with that?
DANDY WEYN: Well, we want to bring some structure in, right? So what I can do is grab this and say, let’s transform these tweets into a JSON document. I can then expand those records and look for what I’m interested in. I’m interested in when the tweet was created. I’m also interested in the user who actually sent the tweet. That data gets filtered down.
So what I can do next is say, hey, how about we show the screen name and we show the location. Now, the cool thing is that each of these tweets was created at a certain time. So I can take that and add an additional dimension to GeoFlow, the fourth dimension, the dimension of time, and really map out how things evolved over time, and provide those data insights.
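To make those shaping steps concrete, here is a rough Python sketch of the same transformation Data Explorer is performing here, assuming a hypothetical file of raw tweets in the standard Twitter JSON layout (created_at, user.screen_name, user.location):

```python
# Illustrative only: the same shaping Data Explorer does, by hand.
# Assumes a hypothetical file with one raw tweet JSON document per line.
import json

rows = []
with open("teched_tweets.json") as f:
    for line in f:
        tweet = json.loads(line)  # parse the raw text as JSON
        rows.append({
            "created_at":  tweet["created_at"],           # the time dimension
            "screen_name": tweet["user"]["screen_name"],  # who tweeted
            "location":    tweet["user"]["location"],     # where they say they are
        })

# 'rows' is now a flat table: one record per tweet with time, user, and
# location -- exactly the columns needed to drive a time-animated heat map.
print(rows[:3])
```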
ERON KELLY: So with Data Explorer you really created a powerful query to parse the data that was sitting up there in your Hadoop cluster.
DANDY WEYN: Absolutely.
ERON KELLY: And now you’re going to go do a visualization in GeoFlow.
DANDY WEYN: Yes. I’m going to go back to GeoFlow and actually present that information on a heat map. So what you see here is all the tweets, with the heat map reflecting those tweets from the audience. And since I have that additional dimension of time, I can just hit the play button and really show how that evolves over time.
Now this is Twitter. This is going to go fast. So you can see all that activity, these tweets. People from the Netherlands tweeting. U.K., and look at Spain, there’s something happening right there.
ERON KELLY: What happened down there?
DANDY WEYN: Well, let’s drill down into that one specifically. So what we’re going to see as we drill down into Spain, which seems to be very hot right in the center of Madrid, we can really drill down and say, look at that, we’re at the Convention Center. Welcome, everyone. This is cool.
ERON KELLY: Thanks, Dandy. That is some powerful BI. (Applause.)
DANDY WEYN: Thank you.
ERON KELLY: So as you can see, we used data to customize this keynote, and certainly made it a memorable experience for one member of the audience, the database guru. How are you going to use data and the tools to change your business?
Muchas gracias. (Applause.)
BRAD ANDERSON: All right. You know, every time I get a chance to see one of these demonstrations of GeoFlow and these visualizations, it’s just amazing to me. Just like Eron mentioned, this is all about taking all of the data that you have in your organization, from your partners, from your competitors, from the Web, from the cloud, and bringing it together in a way that is powerful and allows you to pull out those insights to differentiate yourselves and your organizations. I hope you get a chance to download the new SQL Server 2014 and take a look at how powerful that is inside of SharePoint and inside of Excel.
So we talked about enabling users on their devices, creating great apps, gaining insights from data, and let’s now transition to transforming the datacenter. Think about the promise of the cloud for a minute. The cloud promises things like agility. It reduces costs, it reduces complexity, and it really allows you to respond to the business quickly.
And this is one of the most unique things that I think Microsoft brings to the table as a partner of yours. At Microsoft, we operate more than 200 cloud services that are serving hundreds of millions of users around the world. We’ve literally deployed servers into our datacenters in the hundreds of thousands at a time.
Why is that relevant? Why is that important? Well, you’d better believe we learn, and we learn an incredible amount as we operate these incredible cloud services, and as we operate Azure. And our commitment to you, and what I think really differentiates us more than any organization or any other partner you could partner with here, is we are committed to bringing every single thing that we learn as we operate these services in the public cloud to you to run in your datacenters.
So innovation that we’re doing in the hardware, innovation that we’re doing in the operating system, in the virtualization, in the application, the management, and the process. Bringing all of that to you so that you can benefit from that, and then be consistent across clouds as I’m going to talk about in a little bit more.
So as we were designing these R2 releases, we had a couple of big design principles. First, almost every organization we talk to tells us they’re going to be running multiple clouds, or consuming capacity from multiple clouds. So the ability to make it easy and seamless for you to consume multiple clouds, and just stretch and have a datacenter without boundaries, was one of our principal design features.
I couldn’t think of a better way to term the next one, but it’s just cloud innovation everywhere. Innovation in storage, networking, compute, in the workloads, stretching the infrastructure out to Azure, and stretching the infrastructure out to your service providers. Literally cloud innovation everywhere.
And what this delivers is the ability for you to deploy your applications dynamically and quickly. Literally, ladies and gentlemen, these R2 releases we’re talking about with you today are us bringing the things that we’re doing in these cloud services and in Azure to you. That literally is what we’re doing.
So, again, today we’re pleased to announce Windows Server 2012 R2 and System Center 2012 R2, and we want you to see how some organizations have been using the assets that we’ve been building. Since we’re in Spain, we thought this would be a great one to talk about. Telefónica, one of the world’s largest organizations, has been doing some incredible innovation in its datacenters, really consolidating its infrastructure. Let’s take a look at how they’re using the Microsoft capabilities.
(Video segment.)
BRAD ANDERSON: Wow, I mean really wow, 18,000 servers being consolidated down to 1,600 servers. The world’s largest tier-four datacenter, and tier four means everything, absolutely everything, is redundant. Five nines of guaranteed service, which means about five minutes of downtime a year. And I love some of the comments they gave in there about the challenges of bringing everything together, doing that on top of Hyper-V, because VMware was just too expensive. If you’ve ever had the question of whether Hyper-V and the Microsoft cloud capabilities are ready for prime time, ready for your mission-critical workloads, the answer is certainly yes. And so I’d encourage you to really take a look at that.
But it gets better. So today, again, the preview bits of Windows Server 2012 R2 and the entire System Center 2012 R2 suite are available for your download.
OK, yes. Come on, give it up, a big deal, everybody. (Applause.) In less than a year we’ve turned around major releases and I can tell you one thing, we’ve done a heck of a lot more than a year’s worth of innovation in these products. I think when you get ahold of them and you get a chance to see what we’ve done in there, using these cloud-first design principles, and using some of these new changes in our processes, where we literally learn from the cloud and bring it to you, you’re going to see an incredible amount of value delivered in a very, very short period of time.
Let me talk for a minute about our strategy about hybrid cloud. When we talk about hybrid cloud, fundamental and at the core of everything we do is a belief that you should just have consistency across your clouds, consistency for virtualization, for your developers, for your management, for identity, for data, consistency, consistency, consistency. Now why is this important? None of you want to be locked into a cloud. You want to have the ability to move your applications, your virtual machines between a private cloud, a service provider, Azure, at will in a friction-free environment.
And by delivering consistent capabilities across those clouds, that’s what you get when you use the Microsoft Cloud operating system capabilities. You don’t have to go through a migration. You don’t have to go through a translation of the VMs. You can move your VMs at will, friction-free, because we are delivering consistency across those clouds. And as we built these R2 releases we considered all three of those clouds, assuming that most of you will run in a hybrid IT environment.
Now let’s get specific with one of the examples. This morning we’re also announcing the availability of what we call the Windows Azure Pack. The Windows Azure Pack quite literally takes the capabilities that we’ve been developing and running on top of Azure and puts them on top of Windows Server for your benefit. So things like the ability to do high-density website hosting: in Azure we host 4,000 to 5,000 websites on a single Windows Server instance, and you get that capability with the Windows Azure Pack. And the self-service experience you get when you come up to the Windows Azure portal, that portal experience, delivered to you to run in your datacenters for your customers to take advantage of.
In Azure there’s a set of capabilities called Service Bus, and Service Bus is the messaging capability that lets applications spanning multiple clouds stay connected and in sync; we’re delivering Service Bus to all of you to run in your clouds. Those are the kinds of capabilities you get with the Windows Azure Pack. And again, at its base it’s about delivering consistency and our learnings from Windows Azure to you to run in your datacenter.
So let’s take a look at some of these capabilities. And what we want to start out showing you is this incredible experience that’s been pioneered on Windows Azure for use in the public cloud and how we’re going to bring that for you now to run in your datacenters for your customers, for your tenants.
Let’s welcome Clare onto the stage. She’s going to give us a demonstration of the Windows Azure Pack.
CLARE HENRY: Thanks, Brad.
Good morning. It’s an honor to be here today and show you the Windows Azure Pack and how Microsoft is delivering consistency across clouds. With the Windows Azure Pack we take code that was built for the cloud in the cloud, and deliver it to you. Add Windows Server, add System Center, and you have an integrated stack, a stack to build your clouds, clouds that are consistent with Azure. Let’s take a look. Right away this looks just like Azure. And as Brad said, I have the same ability to deploy high-density websites, virtual machines, SQL databases, but take a closer look. This is the Contoso cloud, built on the Contoso Windows Server and System Center infrastructure. And I’ve logged in today using my Active Directory credentials.
Taking a look at some of my deployments here you can see I have a couple of standalone virtual machines. These could be Windows Server, they could be Linux distributions. I also have this new virtual machine role. This new role type allows me to take an application, deploy it into the virtual machine and then scale out using multiple instances.
Next I’d like to show you how easy it is to provision one of these new virtual machines. I go into my gallery, and because I’m logged in as Clare, I’m presented with a set of IT-approved workloads. With the Windows Azure Pack we simplify your ability to create these standard offerings. And as you might expect, in my datacenter here I have some of the most common workloads: Active Directory, SharePoint, SQL Server.
I’m going to deploy a new Web portal, so I’m going to choose IIS, move through my wizard here, I provide a name. Next I’m given this opportunity to choose a version. This ability to version allows you as the IT administrator to start delegating and automating those post-deployment maintenance activities, think software update, patching. Moving through the wizard I’m next presented with a scoped set of options, things like initial instance count, my VM size, my hard drive size, my network configurations. As the IT administrator you set these options. This is where you embed your policies and yet at the same time give your customer the right set of choices.
I finish with my credentials here, my password, confirm my password, and now my virtual machine is configured. The next thing to do is configure the application. Those workloads and applications you saw in the gallery each have a unique set of configuration parameters; what I’m looking at here are the ones specific to IIS. I could go ahead, confirm these, take the defaults, and start the deployment.
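To make that split between what the administrator scopes and what the tenant chooses concrete, here is a purely hypothetical sketch in Python; none of these field names come from the actual Windows Azure Pack schema.

```python
# Hypothetical illustration only -- not the Windows Azure Pack's actual schema.
# The admin publishes a scoped offering; the tenant picks values within it.

iis_role_offering = {                      # authored by the IT administrator
    "name": "IIS Web Portal",
    "versions": ["1.0", "1.1"],            # enables delegated update/patching
    "allowed_vm_sizes": ["Small", "Medium", "Large"],
    "instance_count": {"min": 1, "max": 5},
    "networks": ["Contoso-Tenant-VNet"],
}

tenant_request = {                         # what the tenant chose in the wizard
    "offering": "IIS Web Portal",
    "version": "1.1",
    "vm_size": "Medium",
    "instance_count": 2,
    "network": "Contoso-Tenant-VNet",
}

def validate(request, offering):
    """Check the tenant's choices against the admin's policy scope."""
    lo = offering["instance_count"]["min"]
    hi = offering["instance_count"]["max"]
    return (request["version"] in offering["versions"]
            and request["vm_size"] in offering["allowed_vm_sizes"]
            and lo <= request["instance_count"] <= hi
            and request["network"] in offering["networks"])

print(validate(tenant_request, iis_role_offering))   # True: within policy
```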
That takes a few minutes, so instead what I’d like to do is show you the other dimension of self-service and that’s management. So going back and looking at this particular virtual machine that’s deployed, I’m going to go ahead and click on it, and you’ll see that I get some really great insight into my current usage patterns. Notice I’ve just deployed a couple of cores out of the 50 that are allocated. I’ve barely started using the memory and storage that’s been allocated, as well.
Take a look, my IT department has provided me with an update. Now, with a simple click down here, I could choose to update now, or I might want to wait until it makes more sense for my business. Think for a minute about all of the changes that constantly occur in your business. As the line-of-business owner I want the flexibility to scale out and then scale back in to meet that constant flux and change. I currently have two instances deployed; with a simple click I could quickly deploy a third, a fourth, or a fifth.
Today I just want to deploy one more, so I simply hit save, confirm my deployment, and now the deployment has started. And, because this is powered by Windows Server and System Center, you can rest assured it is fully automated, yet bound within the parameters that you’ve set. What I have shown you today is truly unique in the industry. Combine your skills with the integrated stack of the Windows Azure Pack, System Center, and Windows Server, and you have the right combination, the combination to build what’s next in your organization.
Thank you and have a great week. (Applause.)
BRAD ANDERSON: All right. Thanks, Clare.
All right, so let’s go a little bit deeper now. Clare talked about this consistency across clouds, and we really think about it in two ways. We think about the infrastructure itself, the fabric, so the work that we’re doing in compute, storage, and networking. And then we think about it in terms of the workloads. That also translates into two different experiences: there’s an experience for the IT people who are building the fabric, and then there’s the experience for the consumers of the cloud, and that’s what Clare just demonstrated to you. We’re showing you that exact same experience that we have in Azure, and now you can use it for your users, whether that’s the line of business, your dev/test teams, or, if you’re a service provider, your tenants. That’s the upper half of this, and we’ve done a lot of work on the workloads.
Now, interestingly enough, there are a couple of things I want to talk about, some significant announcements today. First of all, we’re announcing a world-record SAP configuration. Since 2011, Windows has been the platform that holds the world record for the most scalable SAP configurations. On Friday we published with Hewlett-Packard the world record for a three-tier SAP configuration running on a two-socket box, 30 percent better than the world record that VMware had published a couple of years ago. So, again, this is on today’s technology, this is on Server 2012, and I’m going to show you in a couple of minutes how it gets even better in these R2 releases.
And another significant announcement, and this is with Oracle, and this is one of those examples, I think, of two organizations doing exactly the things that all of you our customers want. We’re announcing a partnership with Oracle where this is really about making sure that you can take the Oracle workloads and they will be fully supported on the Microsoft private cloud on Hyper-V, as well as in Azure. OK. You’re going to have license mobility. So if you have Oracle licenses and you want to move those to Windows Azure, with this partnership you now have the ability to do that. OK.
We’re going to have a fully supported Java from Oracle running inside of Azure. And then we will actually be putting out some of the standard configurations for the Oracle workloads, like the database and WebLogic, and those will appear in the Azure gallery. So a significant partnership between Oracle and Microsoft, again, doing what all of you expect of us, making these things run together in a very, very effective way. Now again, that’s on today’s technology.
I want to talk now a little bit more about the bottom half of this diagram, on the right, and some of the things that we’ve been doing in the fabric. I talked about cloud innovation everywhere. One of the things about operating these large clouds, where again you’re deploying hundreds of thousands of servers at a time, is that you innovate and you automate, because everything has got to be automated. Anything that requires human intervention is cost, is time, and is an opportunity for error. For example, in Azure we make more than 50,000 network changes a day. Everything has to be automated, and you are relentless in automating, driving efficiencies, and driving out cost.
Now how are we bringing this to you? What we’re going to show you here in a minute is some of the things we’ve done in these R2 releases, things like the ability to do a live migration between two different versions of Windows Server with zero downtime. Innovation in storage, where you get scale-out, high-availability storage on commodity, cost-effective hardware. Now, we’re not saying we don’t like SANs; we know we’re going to be using SANs for a long time. But in our cloud services at Microsoft everything, absolutely everything, runs on standard, direct-attached storage, and through the magic of software we’re delivering a lot of the capabilities that historically you’ve had to go to SANs for, on commodity, cost-effective hardware, driving your costs down.
And then from a software-defined networking standpoint, the ability to do tenant isolation, the ability to seamlessly stretch your organization out to a service provider or Azure, and then wrapping all of this up with things like disaster recovery, backup, and high availability, to service providers, across your own datacenters, or onto Azure, all in the box with these R2 releases. So just an incredible amount of innovation in the fabric; don’t miss taking a look at that.
Let’s invite Jeff, who is going to come out and show you what we call here Cloud Innovation Everywhere, and show you some of that innovation. Give him a hand.
JEFF WOOLSEY: Thanks, Brad.
(Speaking Spanish.)
Windows Server and System Center 2012 R2 are about making your business more agile, making your datacenter more flexible, and providing you a cloud-optimized infrastructure. Let’s start by taking a look at storage. In Windows Server 2012 we introduced Storage Spaces to provide flexible pooling of disks for fault-tolerant storage. In Server 2012 R2 we’re taking Spaces to the next level by dramatically improving performance and scale while lowering your cost per gigabyte and, more importantly, per I/O. Let me show you.
Here I’ve got a 17-terabyte pool; this pool consists of 20 hard disks and 4 SSDs. The hard disks provide me capacity, while the SSDs provide me fantastic I/O performance. Now, wouldn’t it be great if the storage just automatically moved the hot blocks from the hard disks onto the SSDs? We think so, too. That’s why we’re introducing automatic storage tiering in Windows Server 2012 R2. Let me show you. Let’s go ahead and create some tiered storage out of my pool. I’m going to create storage tiers, and it asks me whether I want a simple or a mirrored layout. I’d like a mirrored layout for better reliability, and I can go with the two-way or the new three-way mirror for better resiliency. Next it asks me to specify the sizes of each of the tiers. To keep it simple I’m going to choose the maximum size for both the SSD tier and the hard disk tier, and just like that I’ve created my tiered storage.
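The idea behind automatic tiering is easy to sketch: track which blocks are hot and keep those on the small, fast SSD tier. Below is a toy Python simulation of that idea, not the actual Storage Spaces algorithm, with made-up numbers throughout.

```python
# Toy model of automatic storage tiering -- not the real Storage Spaces
# algorithm, just the idea: the most frequently accessed blocks are kept
# on the small, fast SSD tier; everything else stays on the HDD tier.
from collections import Counter
import random

SSD_SLOTS = 100          # the SSD tier is small relative to the HDD tier
access_counts = Counter()

def record_access(block_id):
    access_counts[block_id] += 1

def retier():
    """Periodically promote the hottest blocks to SSD, demote the rest."""
    return {blk for blk, _ in access_counts.most_common(SSD_SLOTS)}

# Simulate a skewed workload: a few blocks get most of the I/O.
for _ in range(10_000):
    block = random.randint(0, 50) if random.random() < 0.9 else random.randint(0, 10_000)
    record_access(block)

ssd_tier = retier()
hits = sum(access_counts[b] for b in ssd_tier)
print(f"{hits / 10_000:.0%} of I/O would now be served from the SSD tier")
```

With a skewed workload like this, a small SSD tier absorbs most of the I/O, which is the effect behind the jump you are about to see in the demo.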
Now, to show you the performance impact of tiered storage, what better way than with a SQL load test. Here I have a couple of servers, identically configured except for different back-end storage: one is using direct-attached storage, the other is using the Windows File Server. You can see right now that the Windows File Server delivers spectacular performance; in fact, it’s on par with direct-attached storage, delivering over 7,400 IOPS. But now let’s run the same test with storage tiering enabled. Make sure you’re holding onto your seats for this one, because we’ve just gone from over 7,400 IOPS to over 124,000 IOPS with storage tiering. That’s over a 16x performance improvement. (Applause.)
Keep in mind, to deliver this with traditional storage you’d need over 360 15K SAS drives. We’re doing it with 20 hard disks and four SSDs. So now that you’ve seen how we’re going to reduce your cost per I/O, let me show you how we’re going to reduce your cost per gigabyte. Let’s go over here to our volumes, and you can see I’ve got two volumes, D and E. These are identical, and they’re filled with a whole bunch of virtual machines for a VDI deployment, except that Volume D is nearly full while Volume E is saving over 230 gigabytes, because it’s using Windows de-duplication. Now, wouldn’t it be great if I could use de-duplication with running virtual machines in a VDI deployment?
Of course, to do that you’d want to ensure that there’s no performance penalty. Well, what if I told you we could give you de-duplication and actually improve performance at the same time? It sounds too good to be true, right? Well, let me show you. Let’s go to the split screen for just a moment. Here you can see I’ve got 10 virtual machines. Five of these virtual machines are running on de-duplicated storage, the other five are running on non-de-duplicated storage. In the split screen you can see the connection window, so you can watch the virtual machines as they boot up.
Over here we have this little tool; it just started all the VMs simultaneously, and it’s also monitoring the IP addresses. And you can see that the VMs running on de-dup storage are booting up faster, a lot faster, in fact over twice as fast. How is this possible? Well, with de-duplication we know where all of the common blocks reside, and that gives us the opportunity to do intelligent caching, which means not only do you get better storage efficiency, you get better storage performance with de-duplication.
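That caching argument can be sketched too: if many VDI virtual machines share identical OS blocks and the store knows it, one cached copy serves them all. A toy Python illustration follows; it is not the Windows Server de-duplication implementation, just the idea.

```python
# Toy illustration of why de-duplication can speed up VDI boot storms:
# identical OS blocks across VMs are stored (and cached) only once.

NUM_VMS = 5
OS_BLOCKS = [f"os-{i}" for i in range(1000)]          # shared by every VM

def boot_reads(vm_id):
    # Each VM reads the same OS blocks plus a few unique ones.
    return OS_BLOCKS + [f"vm{vm_id}-unique-{i}" for i in range(50)]

cache = set()
disk_reads = 0
for vm in range(NUM_VMS):
    for block in boot_reads(vm):
        if block not in cache:        # de-dup store: one copy, one cache entry
            disk_reads += 1
            cache.add(block)

total_reads = NUM_VMS * (1000 + 50)
print(f"{disk_reads} physical reads for {total_reads} logical reads "
      f"({disk_reads / total_reads:.0%}): the shared blocks hit cache")
```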
Speaking of better storage performance, last year at TechEd in Europe, we showed something the world has never seen before. Last year, we demonstrated in Hyper-V that we’re able to deliver over one million IOPS from a single virtual machine. Those were 4k IOPS. This year we’re going to raise the bar. This year I’m going to run the test again. This is using Windows Server 2012 R2, but this is using 8k IOPS. And what you can see here is we’re delivering over 1.6 million IOPS from a single virtual machine. This is over a 60 percent performance improvement, and we’re delivering over twice the amount of data because we’re doing this with 8k IOPS. All of this built into Server 2012 R2.
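A quick back-of-the-envelope check of those numbers, assuming the stated I/O sizes of 4 KiB and 8 KiB:

```python
# Back-of-the-envelope check of the IOPS claims, assuming 4 KiB and 8 KiB I/Os.
last_year = 1_000_000 * 4 * 1024      # bytes/sec: 1M IOPS at 4 KiB
this_year = 1_600_000 * 8 * 1024      # bytes/sec: 1.6M IOPS at 8 KiB

print(f"IOPS improvement: {1_600_000 / 1_000_000 - 1:.0%}")   # 60%
print(f"Last year : {last_year / 2**30:.1f} GiB/s")           # ~3.8 GiB/s
print(f"This year : {this_year / 2**30:.1f} GiB/s")           # ~12.2 GiB/s
print(f"Data moved: {this_year / last_year:.1f}x")            # 3.2x, well over twice
```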
Now let’s get to one of your favorite topics, live migration. Now in Windows Server 2012, we led the industry by providing the first hypervisor with shared-nothing live migration. Now we’re going to lead the industry again, this time in terms of live migration performance.
So here I’m going to start a couple of live migrations. In fact, in all of these tests, I’m using the same virtual machine. The VM has eight gigabytes of memory assigned to it, and it’s running a SQL workload inside the VM. The VM is actually running really hard, it’s under load, and there’s constant memory churning. The first test is using Windows Server 2012. The second one is using Windows Server 2012 R2, and it’s using live migration with compression.
With compression we’re taking advantage of the fact that servers ship with an abundance of compute hardware, and the fact that we know that Hyper-V servers are rarely compute bound. So we’re using some of those compute resources to compress the VM inline during live migration, which means it’s more efficient. And, in fact, it’s much faster. And, in fact, it’s already done.
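The trade-off compression makes is simple to sketch: spend otherwise-idle CPU cycles to shrink memory pages before they hit the wire. Here is a rough Python illustration, using zlib purely as a stand-in for whatever compression Hyper-V actually uses, and synthetic data standing in for VM memory.

```python
# Rough illustration of the live-migration-with-compression trade-off:
# spend spare CPU to shrink memory pages before sending them over the wire.
# zlib is a stand-in here; it is not what Hyper-V actually uses.
import os, zlib

PAGE = 4096
# Fake "VM memory": mostly repetitive pages (as real memory often is),
# with some incompressible random pages mixed in.
pages = [b"\x00" * PAGE for _ in range(800)] + [os.urandom(PAGE) for _ in range(200)]

raw_bytes = sum(len(p) for p in pages)
compressed_bytes = sum(len(zlib.compress(p)) for p in pages)

print(f"on the wire uncompressed: {raw_bytes / 2**20:.1f} MiB")
print(f"on the wire compressed  : {compressed_bytes / 2**20:.1f} MiB")
# Fewer bytes on the same network link means the migration finishes sooner,
# as long as the host has spare CPU to do the compression inline.
```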
Now, we could have just stopped there. Live migration compression is awesome. But we didn’t. You see in Windows Server 2012, we also introduced another new technology, SMB Direct using RDMA. And we delivered that in the file server. That allows us to deliver fantastic performance for the file server.
So now we’re going to start up the third test. This time we’re using live migration using SMB Direct over RDMA, again, the same exact virtual machine, and we’re done. (Applause.) Not only was it done faster, but it was done with no CPU utilization because this was all done with RDMA. So that’s just a couple of the live migration capabilities. As Brad mentioned, we also have cross-version live migration as well.
So finally, let’s get to one of your other favorite topics, Hyper-V Replica. Replica is our in-box, easy-to-use virtual machine replication built into Windows Server 2012. And we’ve been listening to your feedback very carefully. In fact, you’ve told us quite honestly that you love it.
But you do have a couple of requests. No. 1, you want to be able to manage it at scale. And No. 2, more importantly, you want to be able to manage it across sites. So, for example, here I’m using Virtual Machine Manager in System Center to manage my on-premises private cloud. However, this is in Madrid, and I also want to be able to do site migration and disaster recovery to Barcelona, or Shanghai, or New York. We’ve heard you loud and clear, and we want to provide you that centralized management console to orchestrate your replication across sites. That’s why we’re introducing Hyper-V Recovery Manager, the orchestration that allows you to manage your replication across multiple sites. Hyper-V Recovery Manager is an Azure service, so there’s no complexity and there’s nothing to install, because it’s already waiting for you in the cloud.
Now, one important point I want to make about Hyper-V Recovery Manager is that the replication is still point-to-point. None of the replication goes to Azure; Hyper-V Recovery Manager just orchestrates it. So, as I just showed you with VMM, I was running a whole bunch of virtual machines on premises. Well, here are those same protected items, here in Contoso Corp., and here’s my primary site. I can quickly come here and see that all of my virtual machines are being protected, a quick glance and go. In addition, for site migration and disaster recovery, you want to be able to do it in a systematic and orchestrated way: you want to bring up groups of virtual machines in the correct and prescribed order. That’s why we have recovery plans. You can think of a recovery plan as your runbook in the cloud.
So, for example, I have a pre-failover set. In fact, this is a script. In fact, here’s my first set of virtual machines. Here’s my second set of VMs and so on. So it’s not only virtual machines, it’s scripts, and these could even be manual tasks. And what happens when that day arises when you want to perform that site migration or disaster recovery? You simply click here on failover. I’m going to click on planned failover. I’m going to confirm the site that I’m migrating to. And I’m going to click on OK. Just like that I’m migrating to my second site.
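A recovery plan is, in essence, an ordered runbook: groups of virtual machines, scripts, and manual tasks executed one after another. Here is a minimal Python sketch of that structure, with hypothetical names rather than the Hyper-V Recovery Manager object model.

```python
# Minimal sketch of a recovery plan as an ordered runbook.
# Hypothetical names and structure -- not the Hyper-V Recovery Manager model.

recovery_plan = [
    ("pre-failover script", ["stop-ingest.ps1"]),
    ("group 1 (infrastructure)", ["DC-01", "SQL-01"]),
    ("group 2 (application)", ["WEB-01", "WEB-02"]),
    ("manual task", ["confirm DNS cutover"]),
]

def fail_over(vm):
    print(f"    failing over {vm} to the secondary site")

def run(plan):
    # Each step must finish before the next begins, so dependent tiers
    # (e.g. SQL before the web front ends) come up in the prescribed order.
    for step_name, items in plan:
        print(f"step: {step_name}")
        for item in items:
            if step_name.startswith("group"):
                fail_over(item)
            else:
                print(f"    run/acknowledge: {item}")

run(recovery_plan)
```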
One other thing to point out, since this is all running on Azure, I could have done all of this from my phone. DR and site migration has never been this easy. (Applause.)
So think about what you’ve seen: storage tiering, de-duplication, the live migration capabilities, Hyper-V Recovery Manager. These are just a few of the dozens of reasons why Windows Server 2012 R2, System Center 2012 R2, and Azure are the best way to cloud-optimize your business.
Thank you very much. (Applause.)
BRAD ANDERSON: Great job, Jeff.
You know, one of the biggest challenges we have in putting this keynote together is that there is so much innovation in these R2 releases, just trying to select what we wanted to kind of show you was amazing. So I hope you take full advantage of the week and actually go to the sessions, get your hands on. The reality is I think many of you are actually downloading the bits in the room right now, because the wireless has just been flooded here in the last hour.
So let’s just summarize real quickly. As you think about the transformation in the datacenter, literally taking the things that we’re learning every day, day-in and day-out, inside of Azure in these 200-plus services, bringing them to you to run in your datacenter, giving you the ability to have a datacenter without boundaries, innovating everywhere in the hardware, storage, networking, compute, in the workloads, partnerships with the key workload providers around the industry. These are the kinds of things that we’re delivering to you to run in your datacenter.
So think about what we’ve talked about here in the last 90 minutes. We started with the work that we’re doing to enable your users to be productive on their devices, and some of the things we’re doing to enable you to take advantage of the BYO trends.
We then talked about the innovations in the app platform, in Visual Studio and in Azure, that enable you to build even better, richer apps than in the past. Then data, and gaining insights from that data through the work in SQL. And we just ended with the work that we’re doing to help you transform your datacenter.
It’s interesting, people will ask me, Brad, what’s Microsoft’s vision around the cloud? It’s really simple: our vision is what we call the Cloud OS, and it’s what we’ve spent the last 90 minutes talking about. The Cloud OS delivers four promises: it enables you to empower what we call people-centric IT; it allows you to build modern, rich applications; it enables you to unlock the insights in any data; and it allows you to transform your datacenter. That is the promise of the Cloud OS. That’s the promise we’re making to all of you. We will deliver on these promises through the work that we’re doing, and enable you to do that across your clouds, your service providers, and into Azure.
Now, also just think for a minute about the number of announcements you’ve seen this morning, a refresh of every one of the products and services that enables this: Windows Server 2012 R2, System Center 2012 R2, a new update to Windows Intune, Visual Studio 2013, SQL Server 2014, new versions coming out at a rapid pace, incredible value, incredible innovation coming from Microsoft.
This is the kind of value that you can expect to see us deliver time and time again as we really take these things that we’re doing in Azure and these cloud-first design principles and bring them to you.
Now let’s take a look at how one organization is using the entire spectrum of these capabilities, and let’s take a look at how Aston Martin is using the Microsoft Cloud OS.
(Video segment.)
Let me just take a moment and offer our condolences to Aston Martin. As many of you know, there was a tragic accident at the Le Mans 24-hour race, and one of their drivers died. Our thoughts and prayers go out to the entire Aston Martin family.
But when you think about Aston Martin, what comes to mind? Quality. Quality and luxury. A couple of interesting data points: in the new Vanquish, there are over a million stitches in the interior of the car. This is actually the 100th anniversary of Aston Martin. In their history, they’ve built 65,000 cars, and listen to this statistic: 90 percent of those cars still exist today. Quality and luxury in everything they do. And that is certainly the perspective their IT department has. And I’m proud to talk about how they have embraced working with us, across the entire spectrum of capabilities, in line with our vision for the Cloud OS.
So thank you for being here. On behalf of all of Microsoft we so appreciate the ability to partner with you and your businesses. It is truly an honor and it’s a stewardship that we take incredibly seriously. And again, I want to express just how grateful we are to be able to work with all of you.
Now, in terms of continuing the conversation, one of the things I hope you will do is attend the Foundation Sessions that take place immediately following this session, one for each of those major areas. So depending on your specialty or area of focus, whether it’s people-centric IT, data, development, or the cloud platform, I would strongly encourage you to attend the Foundation Sessions that start right after, because you’ll get a deeper click on all the demos and a deeper click on all of the new announcements that have been made today. And that’s a great way to start off TechEd Europe.
Again thank you. Thank you for being here. Have a wonderful, wonderful week. (Applause.)
END