Brad Anderson: Microsoft Management Summit 2011 – Day 1

ANNOUNCER: Ladies and gentlemen, please welcome Brad Anderson, corporate vice president, Microsoft Management and Security Division. (Applause.)

BRAD ANDERSON: Hey, good morning, everybody. Well, it is inspiring and awesome to stand up here and look out at all of you. You know, it's an amazing opportunity to be here and represent the work of literally thousands of engineers who work across Windows, System Center, and the Desktop Optimization Pack.

They’ve got another sell-out event here for us, and we are incredibly grateful and appreciative of all of you and your willingness to come here, spend this week with us.

You know, I've never in my career been more excited to be at an event like this. In terms of innovation, in terms of value, and in terms of the work that you're going to see, I've never seen anything like what we're going to show you this week coming out of a single team.

You know, as I take a look at the execution of all the organizations that work on this and the value that it's generating, I hope you see what we see: just incredible innovation, incredible value — and give us feedback. You know, help us understand: are we in the right areas? Are we not in the right areas? But I think you're going to be delighted, and like I said, I have never been more excited to be here.

So, in terms of what we’re going to do for the next couple of days, you know, as I think about the largest trends, and I think about the things that IT is grappling with every day, the two that just come to the top of mind are the cloud, cloud computing and the consumerization of IT, and just the growing number of devices that are out there, and the way that users want to work. So, that’s how we’re going to structure the presentations.

I think the most interesting thing about the cloud and the consumerization of IT is where they come together. You know, these devices that are being used now around the world are actually made for the cloud. They’re always on; they’re always with your user; they have different kinds of antennae in them. There are things that we can now contemplate and do that we’ve never been able to do in the past that we’ve just dreamed about.

So, today we're going to focus on the cloud and cloud computing, and really we're going to focus on how we're going to empower all of you to take advantage of the cloud. And then tomorrow, we're going to focus on the consumerization of IT and how you can empower others, through what we're delivering, to be productive where, when, and how all of your users want to be.

So, let's talk about the cloud now for a minute. And it's really interesting: as I get out and I talk with all of you and I ask you what you think about the cloud, I often hear quotes like what you see up here: "What is the cloud? Is it just a set of fancy catchphrases that's kind of lacking technical backing? I see opportunity in it." I hear many of you say, "You know, these public cloud offerings, I see those as a way for many teams and organizations to kind of go around IT." And I hear angst from many of you: "How does it impact my job? How does it impact me personally? How does it impact my IT organization?"

So, what I want to talk to you about is, you know, I fundamentally believe that the cloud provides opportunity for you personally in your careers, and it certainly provides opportunity for you to advance and differentiate your businesses.

So, what we have been focused on is making the cloud approachable for all of you. And that's the typical way that Microsoft approaches a problem: how do we bring this to market in a way that simplifies the concept, simplifies how you use it, simplifies how you deploy it? And I think what you're going to see throughout this morning is that what we've done is make the cloud very, very approachable for you.

Now, let's just kind of ground ourselves in what I mean by the cloud. When I'm talking about the cloud, I want you to think about a compute model, not a location. So, when I talk about the cloud, I'm not necessarily talking about the public cloud or the private cloud; I'm talking about the model that underlies wherever the cloud may be running.

Now, there are a certain set of attributes and characteristics that are just fundamental and core to cloud computing. Things like the ability to offer a self-service experience so that the application or service owners can do real-time deployment of the services. It runs on a shared infrastructure, and a part of our job is to make sure that we’re taking the fullest advantage of that shared infrastructure.

Cloud computing, you know, builds applications that are able to dynamically expand and contract as the business needs. And then, finally, it's usage-based, meaning that you can track what's being used in terms of storage, compute, and network; bill back, if you want to; and, at a minimum, show back, so that the service owners, the business units, understand the cost of what they're consuming. These are the core attributes of the cloud.
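As a rough aside, that usage-based attribute boils down to metering consumption per business unit and pricing it against a rate card. Here is a minimal showback sketch in Python; the unit names, rates, and figures are all hypothetical:

```python
# Hypothetical showback sketch: meter per-unit consumption, price it with
# a rate card, and report the cost back to each business unit.
from collections import defaultdict

RATE_CARD = {"cpu_hours": 0.05, "storage_gb": 0.10, "network_gb": 0.02}

usage_records = [
    {"unit": "finance", "cpu_hours": 1200, "storage_gb": 500, "network_gb": 80},
    {"unit": "retail", "cpu_hours": 300, "storage_gb": 2000, "network_gb": 40},
]

def showback(records):
    totals = defaultdict(float)
    for rec in records:
        totals[rec["unit"]] += sum(rec[res] * rate for res, rate in RATE_CARD.items())
    return dict(totals)

print(showback(usage_records))  # {'finance': 111.6, 'retail': 215.8}
```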

Again, this is independent of location. The industry will talk about infrastructure as a service, platform as a service, software as a service. We fundamentally believe that every organization and each one of you is going to have your own unique journey to the cloud. The majority of organizations are going to be consuming cloud capacity from multiple locations: your datacenters, your partners' datacenters, Microsoft datacenters. Most of you will be running in a hybrid model.

So, our job and the role that I see my organization playing is enabling you to approach the cloud on your terms. Make the decisions that make the most sense to your organization, and then we’ll bring that together in a consistent experience, a consistent model, and take all that diversity and all that challenge, if you will, of trying to look at capacity across multiple clouds and give you that consistent experience.

Now, at Microsoft, we're investing in all three of these. We're investing in infrastructure as a service, platform as a service, software as a service. And what I think is unique about that is that it gives us unique insight and learnings that we're going to bring to you.

So, as you think about Microsoft, and why you would go with Microsoft as you think about how you want to utilize and consume the cloud, keep in mind — and I'm going to go through this in the next couple of slides — we are investing in all three of these. We're bringing these learnings to a datacenter near you, or to your datacenters, and it's all about enabling you to approach the cloud on your terms.

Now, you know, Microsoft opened its first datacenter in 1989, and since then we've opened lots and lots of datacenters. We've spent billions and billions of dollars on this. This is just a couple of the datacenters that we've opened. We have literally over 200 services that run on top of all these datacenters — services like Bing, Communicator, Windows Intune.

And it's fascinating: with each one of these datacenters, as we build it and as we put the services in there, we learn a little bit. And it's amazing: datacenters that we thought were world-class in terms of cutting-edge technology a year ago are now obsolete.

I’m just going to point out a couple learnings here, and then I’m going to go through some more experiences with it.

You know, our Quincy, Washington, datacenter is all powered by hydroelectric power. Our Chicago datacenter has over 700,000 servers in that single datacenter. In San Antonio, we use all recycled water for the cooling. And one of the fascinating things about Dublin is that as we built it, we realized we could use the ambient or outside air for the cooling, so we actually don't even use water for the cooling. So, we're making a lot of investments in how we leverage and do the right things for the environment.

Now, as we’ve built out these datacenters and we’ve built out these services that we deliver, we’ve always had two fundamental goals in mind. The first one is to deliver world-class services that delight our customer base, and second, to take that learning, permeate that through all of Microsoft, and then use that as the foundation for the software that we deliver to all of you to run in your datacenters.

As I think about why we're here at the Microsoft Management Summit today, you know, that second goal is core to what we want to talk about.

What we're going to share with you over the next hour is the learnings that we've had from these 200 services that run in these datacenters around the world on hundreds of thousands of servers, and how we've taken that learning over the last four or five years, permeated it through the engineering organization, and delivered what I believe is the most simple, the most complete, and the most comprehensive cloud solution for all of you to use.

Now, let’s just spend a couple of minutes talking about what these core learnings are. So, I’m going to walk you through an example of how our datacenters look, and then over the next hour, we’re going to walk through exactly how we’ve taken these principles and these concepts and implemented them in our software that we’re delivering to you to help you build a private cloud.

And from this point forward, after I go through these learnings, the majority of my conversation is going to be focused on you building a private cloud and how to take advantage of these learnings.

So, the first thing is standardization. And I know all of you try, and you strive to be standardized in your datacenters, but in these cloud datacenters that we have at Microsoft, I think the only way to describe it is to say it's extreme standardization. And "extreme" is the right word. As we go out and we buy servers, we buy servers in the tens of thousands at a time.

You know, as we get our storage and our network, it's all the same. You can think about this in terms of things like, you know, Southwest Airlines: they talk about one of the reasons they can keep their costs so low being that they buy one model of airplane. And so they have to stock fewer parts, and they need less specialized expertise.

It's very similar in our datacenters. We standardize to the extreme. The next thing is we understand that each of these services has unique requirements. Some need to be optimized for transactions. Some need to be optimized for storage; some for computation. So, each service is given a set of boundaries, a cloud, in which it's empowered to run, and then those service owners can configure and optimize that set of resources in terms of compute, storage and network for the unique needs of that service. OK? So we actually enable the business to do what they need to do.

Third, and this is a really important point, the applications are built with an understanding that failure happens. So, the applications are architected with an inherent understanding that servers are going to go down, that disks are going to fail. And they're architected in a way where there's no dependence on a single server; the state of the application is separated from the operating system, so when something does happen, the service seamlessly moves that work to another disk, to another server, and it's all taken care of in the architecture of the application. And that's what I mean here when I talk about a cloud-style application.
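To make that principle concrete, here is a minimal sketch of a failure-aware design: state lives in shared storage rather than on any one server, so work simply retries on another node. Everything here, node names included, is illustrative, not how these services are actually written:

```python
# Sketch of the "design for failure" principle: no dependence on a single
# server, state kept outside the worker, work retried on another node.
NODES = ["node-01", "node-02", "node-03"]

class NodeDown(Exception):
    pass

def run_on(node, task):
    # Stand-in for dispatching work to a server; node-01 is "down" here
    # to simulate a failed machine.
    if node == "node-01":
        raise NodeDown(node)
    return f"{task} completed on {node}"

def run_with_failover(task, state_store):
    # State is read from and written to shared storage, so any node can
    # pick the task up after a failure.
    for node in NODES:
        try:
            result = run_on(node, task)
            state_store[task] = "done"
            return result
        except NodeDown:
            state_store[task] = "retrying on next node"
    raise RuntimeError("all nodes failed")

state = {}
print(run_with_failover("index-rebuild", state))  # completes on node-02
```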

With this kind of standardization, we get a really rare opportunity, which is we get to re-imagine how we do processes. We automate the daylights out of everything through runbook automation. And then we put rigorous change control in.

Let me give you just a couple of examples of these re-imagined processes. When disks go down, you know, we don't go out and replace a disk in a rack until that particular rack has hit a point where 10, 15 percent of the storage is no longer functioning; then we dispatch somebody out to do it.

When a server fails, we re-image that server remotely, and if it doesn't come up, we assume there's a hardware failure, and we go and replace the server. But it's all about understanding what you can do when you're trying to run at a scale of hundreds of thousands of servers, and then using tools like Opalis (which, today, we're announcing is named System Center Orchestrator) to do your runbook automation: to have that rigor and predictability, to have things happen time after time in the right way, and to take the human error out.
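The two re-imagined processes described here reduce to a simple runbook shape: a threshold rule for disk replacement and a re-image-first rule for failed servers. This is an illustrative sketch only; the threshold and helper functions are assumptions, not Orchestrator runbooks:

```python
# Illustrative runbook logic for the two processes described above.
FAILURE_THRESHOLD = 0.10  # dispatch once roughly 10-15% of a rack's disks are dead

def dispatch_technician(rack_id):
    print(f"work order: replace all failed disks in rack {rack_id}")

def reimage(server):
    print(f"re-imaging {server} remotely")

def comes_back_up(server):
    return False  # stand-in for a post-reimage health check

def schedule_replacement(server):
    print(f"work order: replace {server} (suspected hardware failure)")

def check_rack(rack_id, disks):
    # Don't roll a truck for one dead disk; wait for the threshold.
    failed = sum(1 for d in disks if d["status"] == "failed")
    if failed / len(disks) >= FAILURE_THRESHOLD:
        dispatch_technician(rack_id)

def handle_server_failure(server):
    # Re-image first; only assume bad hardware if it stays down.
    reimage(server)
    if not comes_back_up(server):
        schedule_replacement(server)

check_rack("rack-42", [{"status": "failed"}] * 2 + [{"status": "ok"}] * 10)
handle_server_failure("srv-0173")
```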

And then finally, because we've built all this — we have this architecture, the standardization, and each service has its own space that it can own and play within — we give a full self-service experience, so that the owners of the services, the owners of Bing, the owners of Communicator, the owners of Hotmail, have full control, within the constraints that they've been given by the team that manages the infrastructure, to do what is right for their particular service or their particular business.

These are the learnings from the public cloud that we've been building for many, many, many years. And now I want to start walking you through how we've been delivering this, and it starts with what's available today.

So, today, you know, we have Hyper-V Cloud out in the world. It's doing phenomenally well for us. It's being adopted at rates that we're just ecstatic with, and it's a combination of Hyper-V and System Center. It uses the tools that you're all familiar with. Today, System Center manages more Windows servers in the world than any other solution on the market, and Hyper-V Cloud builds on top of those familiar tools that you're accustomed to.

There’s a great partner ecosystem around this. You know, we’ve worked with all the OEMs around the world who are doing that work. So, for example, with Dell, we’re integrating with AIM and with VIS and some of the innovation that they’re doing there. With Hewlett-Packard, we’re integrating with Insight and with Matrix. And so, you know, we’re working across the industry to ensure that as you go out and you make your decisions on who is going to be your cloud, you know, from a software perspective and who’s going to be your hardware provider, we’ve done that integration.

Now, it's interesting: this is the second year we've done what we call a server aquarium. I would really encourage you to go take a look at this. HP has gone out and built a server aquarium much like they did last year, and it's actually the platform on which we are running all the VMs when you go to your hands-on labs. So, at any given time during the day, there are more than 400 hands-on labs going on simultaneously, and using this infrastructure that HP has built here, we are literally able to provision 1,600 virtual machines for those 400 users in less than 10 minutes.

You know, last year we built this where we took 40 different servers and partitioned them into six clouds. This year, it's 32 blade servers that have been partitioned into a single cloud that is running all the hands-on labs. So, I'd really encourage you to go take a look at that and at what that experience is. It's a great example of how you can use the technology from Microsoft and partners in building clouds.

Now, one of the things that I've heard loud and clear from all of you as I visit with you is: "Microsoft, we expect you to work across Microsoft and deliver the most optimized solutions for your workloads on your virtualization stack." Loud and clear. You expect that; we expect that of ourselves. And last week, an external company — you can take a look here, an organization called the Enterprise Strategy Group — published a set of findings where they went and tested some of the most common workloads from Microsoft, SharePoint, SQL, Exchange, on the Microsoft virtualization stack.

Now, while this wasn’t a comparative test, I can assure you that if you go out and compare this with anything else on the market, running the Microsoft workload on the Microsoft Virtualization stack is the most performant, is the most complete, is the most comprehensive solution. We do work across all of Microsoft to ensure you have that best experience.

Now, that’s not me talking, that’s an external organization talking. One of the things I thought would be interesting here is to actually let a customer that you’re all aware of, an incredibly well-known brand, talk about how they’re using the Microsoft virtualization solution today that’s in market to run their business. So, let’s take a look at what Target’s doing.

(Video segment.)

BRAD ANDERSON: I think the most significant thing that I would ask you to take away from that, if you had the question, "Is the Microsoft virtualization solution ready for prime time, for your mission-critical apps?" The answer is: Yes.

You know, Brad and Fritzer (ph.) here, they're in their red Target shirts; I'm sure you'll get a chance to see them and ask them how they're using the system. I was actually out visiting them in the second week of December, and I'll tell you, Minneapolis in December is a place, you know, that's better to visit in the summer. The high was 6 that particular day; I think I was outside for a grand total of about two minutes.

A couple of interesting points to call out here. The application that runs the pharmacy — so the pharmacy application — is actually a SUSE Linux application running on Hyper-V, also managed by System Center. And Target manages over 300,000 Windows endpoints through a single Configuration Manager hierarchy — their PCs, their point-of-sale devices, their inventory devices. You know, it's just a great example of how System Center, both on the client, on the desktop, and in the datacenter, is enabling Target to advance their business. It's been a great partnership, and we're really appreciative of that.

OK. So, now the journey continues. I am incredibly proud to be here today and announce System Center 2012. As you think about System Center 2012, think of literally every single product in the portfolio revving within the next year: Virtual Machine Manager, Operations Manager, Configuration Manager, Service Manager, and a couple of new products that we're going to talk to you about today that you haven't seen before.

So, as you think about this, I want you to imagine as we go through the next 55 minutes about how you can take advantage of this new innovation and how what we’re going to show you today is going to enable you to approach the cloud, again, as a computing model, not as a location, on your terms and take advantage of all this new opportunity, all this new capability.

Now, I want to set the stage a little bit because there’s some interesting research that we have found as we built this over the last couple of years that deals with the different roles inside of most organizations. And it’s important that you actually lock and load this into your minds because the roles are evolving in most organizations.

We have a role that we see evolving called the service consumer, which works hand-in-hand with the service provider. Now, if you think about the service provider, most of you in this room are going to identify yourselves with this particular role. This is the role that is all concerned about building out an infrastructure and providing a service level to the business, to the business units, to the applications. All concerned about, you know: how do I deliver compute, storage, and network capacity at a predictable price with a given SLA, in a world where your budget is usually constrained? The budget is constant, and yet the expectations on you are growing.

OK? These are kind of the infrastructure people that Microsoft has just really lived with for many, many years. And, again, I think this is where most of you will self-identify.

Now, I'm going to talk about this other role, and many of you may wear both of those hats. But what we see evolving more and more, as this cloud computing architecture really takes hold, is one team or one organization that builds out the infrastructure, and then another team that, through a self-service experience, is given the opportunity to run and operate the applications or services of their business on that infrastructure.

OK? So throughout the day, I’m going to talk about these individuals as the service consumer. And they’re all concerned about simplicity, agility, they want that self-service experience, and they want to get things done now. What happens if the service providers don’t deliver these attributes to the service consumers? What happens? What do they do? They go around IT. They’ll go, and they’ll build an application on Azure; they’ll go build an application on the Amazon infrastructure.

So, many of you will tell me, “Hey, I feel threatened by these public clouds. I feel threatened because it gives — you know, whether it be dev and test, whether it be a business unit, it gives them the ability to go around me because they can simply get capacity from someplace else.”

I think that's true, but what we're going to deliver to you is a way to have a very simple experience to define clouds, delegate those clouds, and then give those service consumers a self-service experience that gives them all these attributes that they're in need of.

Now, let me set up what we’re going to do for the next little while with the demos. We’re going to start by showing you — this is the first time we’ve ever shown this — a project that’s code-named Concero. Concero is the self-service portal that we have built specifically for the service consumer. So, we’re going to show you the experience that you’re going to be able to deliver to that service consumer that allows them full control, which you delegate to them, which you give them the rights to, in a very simple, self-service experience.

Then we’re going to go show you all the components that the service provider will use to build, manage the cloud, monitor the cloud. And then we’ll come back at the end and show you a little bit more about that self-service experience for the service consumer. Does that make sense?

OK, so we’ll start, and we’re going to bring out Jeremy, and Jeremy’s going to show you Project Concero, again, this self-service experience for the service consumer. Let’s give him a hand. (Applause.)

JEREMY WINTER: Hey, Brad.

BRAD ANDERSON: Hey, Jeremy, welcome.

JEREMY WINTER: Good morning. OK, I’m really excited to be here today to give you a preview of this Concero project, which my team has been working on as a part of the upcoming System Center release.

We’ve heard loud and clear from our customers that you need the control, simplicity and improved productivity between your development and IT organizations. As Brad mentioned, Concero’s about empowering those application owners, giving them the freedom and agility to manage the business on their terms.

Let’s take a look. We have here on the screen a Web-based experience that gives you visibility into the services, the virtual machines and the cloud that they run on. And the key point about Concero, again, is it’s about self-service.

Let's look at the virtual machines. Now, virtualization has brought rapid consolidation and deployment benefits to your business. And with Concero, you can leverage that existing investment regardless of what technology you've used. And by leveraging Concero, you can delegate to the application owners and give them that self-service experience they need to really realize the benefits of getting it into their hands.

I can come through and do certain functions, like start or stop, with these controls, and I'll shut down this VM. You can see here that it's starting to shut down, powering off.

But what matters most to you day to day is your services. That’s what makes your business tick. It’s not about the virtual machine; it’s about the applications that run in them. Concero gives you that consistent view of services, regardless of whether they’re running on Virtual Machine Manager-based private clouds, as well as Windows Azure.

Now, one of the benefits and key advancements that we’ve actually started to build as part of System Center is this notion of a service. As I drill in and look at this service, the service is made up of the logical components of your application. You no longer have to think about individual virtual machines; you can work with this as a single component. As I look at the different tiers, I can scale in and scale out proactively, and I can also come to the top and start, suspend or stop. I can do that control. And behind, you can see that we’re actually starting to power up this entire service.

Oh, and by the way, just because we’re working in a Web browser doesn’t mean it has to be clunky. This looks pretty good, doesn’t it? (Applause.)

OK, let's jump over into the clouds. Now, from here I can see the multiple clouds that are available to me. These are the clouds that have been delegated to me, and we'll show you that later.

I can see these both in my private environments and in public ones. And one of the key benefits this really allows me is that I no longer need to worry about the underlying complexities of the infrastructure; that's abstracted through the notion of the cloud. And leveraging Concero also starts to set the stage for you to have application portability.

Now, I’m showing you control, but what about provisioning? Before I can deploy that application, I need to ensure I have the capacity. And I can see here that my finance cloud is starting to run tight on some of my memory capacity.

Through Service Manager, I can have a service catalog that’s published on our internal IT portal, which allows me to request additional resources easily. And I can see here that I can modify my cloud capacity. I’m going to go ahead into the request form, select that finance cloud that I originally saw that we were tight on capacity, and choose one of the packages that my IT has published to me.

I can review the submission and get that submitted. It’s quick and easy, it’s in the hands of IT, and I can get on with my daily job. Thanks, everybody. (Applause.)

BRAD ANDERSON: I wanted to start with that self-service experience, and what I want you to think about and imagine is you building out private clouds and then delivering that self-service experience and that beautiful user interface to allow the service owners — you know, maybe your business units; it depends how your organizations work — to get that experience with that delegated control.

Now, what I want to do now is start walking through the tools that the service provider, the infrastructure people, will use to enable that. OK, so mentally, make a little bit of a shift now, and we’re going to move from the tools that the service consumer would use to the tools that the service provider will use.

First of all, remember when I talked about those learnings from the public cloud. We started with extreme standardization. Now, as you think about your datacenters, how many of you would say your datacenter — from a hardware, you know, compute, storage, network — is extremely standardized?

What I hear when I ask that is, “Well, yeah, I’ve got a little bit of everything.” It’s kind of like the wild, wild west out there.

And so one of the challenges is how do we take your existing environment and basically build a fabric over it. Got Hyper-V; got VMware; got XenServer? You know what? We can help you build a fabric that takes capacity from all of those, different hardware, different storage. The tools that you’re going to see are going to allow us to bring all that, create a fabric on which you can then go out and build private clouds.

And what this allows you to do is take your existing infrastructure, your existing servers, your existing storage, your existing compute and network, and have that represented up to the service owners as a cloud. All of the complexity underneath is hidden from them, because you have tools that inherently understand there's diversity, that there are a lot of different things out there in the world, and that still give that consistent experience to that particular set of users from a service consumer standpoint.

So, now we want to show you that. Jeremy made a request; he needed some additional capacity in order to provision a service inside of his cloud. What we're going to do now is walk you through the tools that you would use to provision additional capacity and then monitor your fabric and your cloud, and then we'll continue up the stack as we continue to build out this private cloud.

So, to start with, we’re going to welcome Michael Michael onstage to give us the first demo, and then we’ll continue on from there. Let’s give him a hand. (Applause.)

Hey, Michael.

MICHAEL MICHAEL: Good morning everyone. Today, I’m going to highlight how easy we have made it to say yes to Jeremy’s request for additional capacity. With System Center and Hyper-V, we’ve made it really simple to build private clouds and to delegate the administration of those clouds to your business units.

You're now looking at the significantly improved user interface of Virtual Machine Manager 2012. Doesn't that ribbon look awesome? (Applause.) One of the key challenges that we're all used to dealing with is how much complexity exists in datacenters today: components like virtual networks, IP pools, load balancers, clusters, hypervisors — the list goes on. I am using the private cloud to abstract this complexity from application owners so they can manage their services easily on their own terms.

Let's take a look at a few of the essential components of the private cloud that Jeremy is using. As I go through the cloud properties, keep in mind that the cloud enables me to deliver infrastructure that I am responsible for and that I manage in the datacenter. Jeremy is using compute resources from the finance cloud, so I'm going to go ahead and open up its properties here.

You're now looking at the entire compute capacity of my datacenter. The finance cloud is utilizing shared compute resources from the Las Vegas host group. The Contoso corporate network is providing network connectivity for services in this cloud. I have already configured a couple of load balancers and a virtual IP profile. Jeremy wanted to use rapid provisioning of virtual machines; that gets him access to my high-performance storage area network.

Now, Jeremy is asking me for access to additional capacity, and he would like more resources. With VMM 2012, I have two levers that I can use to increase the capacity that Jeremy is seeing. I can deliver more resources without adding more fabric, by increasing the capacity of the cloud. But as you can see here, I have already allocated the maximum capacity of my fabric to this cloud. So, my other option is to provision additional physical servers into the fabric.

Jeremy doesn't care about the fabric. All he wants is access to the physical resources quickly. I can build my cloud on top of Xen, ESX or Hyper-V. Did everybody notice that? We now support Citrix XenServer. In fact, we're the only product that offers heterogeneous management across all major hypervisors. Jeremy has a choice in hypervisors, and just like Target Corporation in the video you saw earlier, he chose the platform with the best value, Hyper-V.

In the past, requests for additional capacity came through e-mails and phone calls disrupting my workday and causing me to jump through hoops like a lemur. With System Center, I can provide my application owners with a simple set of standard offerings.

Let's take a look at how we can provision a new physical server into my environment and increase the capacity of the cloud. We're now adding a new Hyper-V host from a physical computer. Let's keep IPMI here as the out-of-band protocol and select an administrator account.

I'm going to enter the IP addresses of a few physical machines that I have already brought into my environment and start the discovery process here. VMM is now contacting the iLO of each one of the physical servers in this range, and it is attempting to authenticate with the baseboard management controller using the credentials I provided. We're using the IPMI protocol in this case; however, VMM 2012 can communicate with bare-metal servers using standards-based protocols like IPMI, DCMI and SMASH. Once it selects a new server for provisioning, VMM will put the server into PXE boot and will deploy an agent that will be used to lay down our image. This is an example of technology we are using in our large-scale datacenters today, and we're bringing it to you.
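Reduced to a sketch, the discovery-and-provision flow being demonstrated looks roughly like this. The controller class and every call below are hypothetical stand-ins, not the VMM 2012 API:

```python
# Hypothetical sketch of bare-metal provisioning: contact each machine's
# management controller out-of-band, authenticate, PXE-boot it, and lay
# down a standard host image.
ADDRESSES = ["10.0.1.21", "10.0.1.22", "10.0.1.23"]

class BoardController:
    """Stand-in for a baseboard management controller speaking IPMI/DCMI/SMASH."""
    def __init__(self, ip):
        self.ip = ip

    def authenticate(self, user, password):
        return True  # assume the run-as account is valid

    def set_pxe_boot_and_reset(self):
        print(f"{self.ip}: set to PXE boot, power cycling")

def provision(ip, host_profile, host_group):
    bmc = BoardController(ip)
    if not bmc.authenticate("provision-admin", "********"):
        raise PermissionError(ip)
    bmc.set_pxe_boot_and_reset()
    # The PXE-booted agent would then pull down the standard host image
    # named by the host profile and join the host to the host group.
    print(f"{ip}: deploying profile '{host_profile}' into '{host_group}'")

for ip in ADDRESSES:
    provision(ip, "hyperv-host-standard", "Las Vegas")
```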

Here, I have a list of all the servers that can be provisioned from bare metal. Let me go ahead and pick server two or three here and get the ball rolling. I’m using host profiles to help standardize the creation of Hyper-V hosts in my environment. Let me select the Las Vegas host group that is providing the compute capacity to Jeremy’s cloud. We enter a computer name here and paste the MAC address, and we’re ready to provision this new server from bare metal.

This process is going to take a few minutes, but when it is complete, we'll be able to use VMM to add the new nodes into our cluster and set capacity controls for Jeremy. I can give Jeremy access to the full resources of the new server, or I can just give him part of the server. Does everybody see how easy it was to do bare-metal provisioning using VMM?

With System Center and Hyper-V, I've been able to quickly discover new servers in my environment, provision them into new resources, and increase the capacity of my cloud. This is a better process, and it enables me to say yes to my application owners.

Thank you. (Applause.)

Now, we have our entire fabric set up. We need to do some monitoring here. So, let’s get Vlad up here to show us how to monitor the environment.

Vlad, good luck.

VLAD JOANOVIC: Thanks, Michael. (Applause.)

Now, just as easily, let's ensure that our fabric is reliable. From System Center Ops Manager 2012, I've got a distributed application here where I can see the health of our fabric. From the compute node, I see all the host groups that Michael could see. Within the Las Vegas group is where we'll see that new node added once the operating system deployment is done, because we've got the Ops Manager agent installed as part of the host image.

But monitoring the compute just isn't enough anymore. How many times have you been troubleshooting a problem, and it actually turned out to be a network issue, or a physical hardware issue, or a storage problem? This is the part about being a fabric owner that I love, because I get a whole view into our fabric so that I can be proactive in finding and fixing issues. Operations Manager 2012, out of the box, can discover and monitor your network.

I see the network devices that are part of my fabric here. But it's actually the connections between these network devices and my compute that are important. And now, for the first time ever, from within System Center, I can see the state of these network connections and my compute. I can also see within this network node diagram the availability of my network devices. It looks like I had some problems yesterday, and I see some metrics like the average response times and the processor utilization.

It looks like there were problems with the response time yesterday, but we’re seeing OK today. I wonder how big that image is that Michael was deploying. I can keep an eye on that right here. And having this network context available within System Center will save you time, as it helps you quickly understand if there’s a network problem.

In addition to monitoring the network, it's important to monitor the physical aspects of your compute, and also your storage and all of the related storage components. And because of our rich partner ecosystem, there are awesome management packs available today to help you monitor all of these physical aspects of your compute and all of your storage-related components, from partners like HP, Dell, IBM, NetApp, EMC and many others that you can learn about in the Pinpoint Catalog.

In this environment here, I’m monitoring my physical hardware; this includes things like voltage, power, fan speed, temperature and lots of other metrics, so that I understand the health of our entire physical enclosure. I’m also monitoring the storage and my storage-related components, so that I understand the health of my disks and my SAN, and so that I know when I’m running out of disk space.

As you’ve seen here today, Ops Manager can help you be more proactive in finding and fixing issues in your fabric, so that you can provide the service level and reliability that your businesses demand. Thank you.

(Applause.)

BRAD ANDERSON: I love these guys. I have to ask the question: is his name really Michael Michael? I think his mom must have loved him more than mine loved me. I'm thinking about changing my name to Brad Brad. How simple is what they just showed you? Michael Michael walked you through how easy it is to add capacity to the cloud; it's just as simple to create a cloud. Many of you remember we showed, a couple of years ago, some prototypes of the work that we were going to do on the network in concert with our partnership with EMC and the Smarts technology. You've seen that all come to fruition now.

One of the great things about technology is I get a chance to actually see what you all are thinking about right now. So, literally, when I went backstage, I took a look at what all of you are tweeting about right now, at what's happening on Twitter. The No. 1 thing that all of you are tweeting about right now is that Microsoft is saying the private cloud is all about the applications. And you're right. As we think about the private cloud, it is all about the apps. The whole reason we deploy infrastructure and servers is to get an app, or a service, deployed. One point I want you to walk away with today is that when you partner with Microsoft to build your cloud, your private cloud, your cloud has wisdom. Let me repeat that: your cloud has wisdom about the infrastructure and about the apps. And I think one of the most unique things that Microsoft brings is that inherent knowledge that we have of the operating system and of the applications that are running on it.

Now, for those of you who have been coming to MMS: a couple of years ago we showed you a prototype, a technology demo. And the demo was all about building a service that would bring in configurations from your servers and then do a comparison with best practices. Let me tell you what we're doing to actually deliver this. And this is, again, one of those examples where I want you to walk away saying, "Microsoft delivers the wisdom that enables me to do my job better, that enables me to deliver more value to the business."

Now, imagine a scenario here with a service that's actually running on Azure; this is an Azure service. Using the Operations Manager agent, we can take your configurations for Windows Server, SQL, Exchange, SharePoint, Dynamics, and bring your configuration, and any changes to it, back into this service. Then we compare your configuration with the knowledge that comes in every day to our customer support organization and through our consulting organization. We take those best practices and proactively respond back down to you: we notify you of configuration changes you've made that are going to be problematic, or of configuration changes you could make that would optimize your experience. Or take a hotfix that was released: how do you find out about a hotfix? It may address a critical issue that you have. You're proactively notified by the service of best practices.

So, today we are announcing a product called System Center Advisor. System Center Advisor does exactly what I just described. Again, it uses the Operations Manager agent, and it works whether Ops Manager is deployed or not. If Ops Manager is there, we actually multi-home the agent, and it talks both to Ops Manager and to the Azure service. It will track every configuration change you make in these platform-based applications, roll that up into an Azure service, give you a view into that service, and then compare it with all the knowledge and all the wisdom that comes in.

There were two origins of this project. The first one: when I meet with senior leaders at organizations, one of the questions I always get is, "Brad, how do I take the knowledge that comes across all of Microsoft and get that into the hands of my IT organization?" The second was some fascinating learning that came from what we call the Exchange Rangers. The Exchange Rangers are an organization that, when a customer has a critical situation, which we define as the customer being down, we drop onsite to help resolve the Exchange issue. One of the things they did is they went and took a look at every crit sit worldwide that had occurred with Exchange and asked the question, "If the customer had the knowledge and wisdom that gets delivered through System Center, what percentage of the critical situations would have been avoided?" Does anybody want to guess what the answer was? More than 50 percent of the customer-down situations would have been avoided, prevented, had they been using System Center.

So, now I want to give you a view of this. What I want to do is actually show you an example where you can see configuration changes, history, and knowledge coming down to you in a proactive way from System Center Advisor. And to do that, let's invite Brjann up onstage.

Thank you.

BRJANN BREKKAN: OK, guys. This is System Center Advisor, and this is the list of all the servers that are being assessed by this service right now. I've installed the Advisor agent on these servers in our private cloud, and Advisor is actually providing me with additional up-to-date guidance for Windows Server, SQL Server, Hyper-V, as well as Active Directory.

So, in the alert view, we can see the latest alerts and warnings across our servers. One of the alerts that I have in here is a Windows Server alert about a slowdown in performance for high-I/O applications. This issue was identified by our support teams; they created a rule in Advisor because they saw this issue affecting a lot of our customers. This rule in Advisor points me directly to an updated KB article, a knowledge base article, and to the hotfix, specifically for my servers.

If I look through the hotfix related to this article, I can see that it might actually be a good fit for the servers in our private cloud. What I have here is the information I need, based on those rules in Advisor that apply to my servers, to go to my test team, validate the update, and make it part of my OS images.

So, I have my servers being assessed by Advisor, but there are other things in Advisor, too. Have you ever heard the words, "It used to work last week," or "Something must have changed," or "Maybe I didn't change anything," right? With the history view in Advisor, you have an all-up view of configuration changes, including the previous value and the updated value, making life a lot easier for you in case you need to do some troubleshooting, or if you just want to win that bet with your friends over lunch that something actually did change in the environment.

So, with System Center Advisor, your servers are being assessed based on the latest best-practices knowledge and experience from Microsoft Customer Support Services. And it's all available right now.

Thank you so much. (Applause.)

BRAD ANDERSON: How did that look? Do you see value in that? What if I were to tell you that the majority of you in this room are already going to be licensed to use this technology? (Applause.)

So, here's how it's going to go: if you have Software Assurance on Windows Server, then for every Windows Server you have Software Assurance on, you are licensed to use System Center Advisor on that server. For every server where you have Exchange with Software Assurance, you will be authorized to use System Center Advisor on that Exchange server. It's all about getting this value-add. Just think of how much experience and how much knowledge comes from the 90,000-plus employees of Microsoft, feeding into this Azure service and proactively notifying you of issues before they arise. It's an incredible value. So, I'd encourage you to download the beta; the beta is available. Start using it.

OK. Now, this is probably the most difficult slide for getting some concepts across to you, so bear with me on this one. We spoke about how, in our cloud datacenters, one of the fundamental concepts is the architecture of the application. The applications are built understanding that stuff happens, that failure is going to happen. They're built in a way that the state of the application is separated from the operating system, so there is no dependency on an operating system or on a single server. So, what I want to talk about right now is some of the innovation that we're doing at the application layer.

So, historically, you take a look at this: an app gets installed on an operating system. Many applications go through what I call this mating ritual, where they kind of embed themselves into the operating system. We want to help you build out a way to separate the application from the operating system, and then take it a step further, which is to enable you to leverage common operating system images across your cloud.

There are two big reasons to do this. One, when we can separate the application from the operating system, that allows us to treat them as two separate entities. So, for example, when Patch Tuesday comes, you can go out and update the operating system, but the application itself, that image, stays the same. You don't have to touch it. Then, when you deploy, we do a real-time composition of the operating system and the application image, and that creates that server. Does that make sense?

The second big fundamental piece of this is, by enabling you to leverage a common operating system image, we dramatically reduce the number of operating system images you have to care for. Now, this is a highly virtualized world. If you virtualize 1,000 servers without using the technology I’m going to describe right now, you have 1,000 operating system images to care for and feed, and update on a regular basis.

What we're going to show you here is that by leveraging models, by being able to think about how you deploy and manage a service at the service level rather than as individual components, you can reuse things like a common operating system image and, instead of having 1,000 operating system images, reduce that down to a single-digit number of operating system images you have to maintain across your environment — incredible value, incredible things.
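The composition idea can be pictured as one golden OS image shared by every service, combined at deployment time with a separately maintained application package. The image and package names below are invented for illustration:

```python
# Sketch of real-time composition: one shared OS image plus a per-service
# application package, combined only at deployment time.
GOLDEN_OS_IMAGE = "ws2008r2-sp1-golden.vhd"

APP_PACKAGES = {
    "stock-trader-web": "stocktrader-web-v3.pkg",
    "stock-trader-db": "stocktrader-db-v3.pkg",
}

def compose_vm(service):
    # Patch Tuesday updates GOLDEN_OS_IMAGE once; the app packages are
    # untouched, and the next deployment picks up the new OS automatically.
    return {"os": GOLDEN_OS_IMAGE, "app": APP_PACKAGES[service]}

for svc in APP_PACKAGES:
    print(svc, "->", compose_vm(svc))
# One OS image to maintain instead of one per VM.
```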

So, what I want to show you here is some of the innovation that we have done in Virtual Machine Manager 2012. We're going to show you how we're going to enable you to manage at the service level; the work that we're doing across Microsoft, for example in the IIS team and the SQL team, to natively separate the application from the operating system; and then, for the existing applications that all of you have, how we're going to leverage application virtualization to separate them.

To do that, we're going to bring out the leader of the Virtual Machine Manager team and the Concero project, Chris Stirrat. Let's give him a hand. (Applause.)

Welcome, Chris.

CHRIS STIRRAT: Thank you, Brad.

Today I'm going to show you a couple of the great innovations coming in the next release of Virtual Machine Manager, VMM 2012: innovations that allow you to manage and deploy at the service level, and innovations that allow you to reduce the number of OS images that you need to maintain. Let's start by looking at a new concept that we call a service template. A service template captures all the information that you need to deploy a service. Think of it as a recipe that VMM will use to consistently deploy that service every single time.

Service templates can be deployed to your self-service users, so you can control what they deploy and manage. And VMM automatically tracks templates, the services that are deployed from them, and those relationships, so that when you update a service template, we can tell you exactly which services also need to be updated.
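That tracking behavior amounts to recording which template version each service was deployed from, so that a template update flags every stale service. A minimal sketch with hypothetical names and version numbers:

```python
# Sketch of template-to-service tracking: bump the template version and
# every service deployed from an older version is flagged for update.
templates = {"stock-trader": 2}  # template name -> current version

deployed_services = [
    {"name": "stock-trader-prod", "template": "stock-trader", "version": 2},
    {"name": "stock-trader-test", "template": "stock-trader", "version": 1},
]

def publish_template_update(name):
    templates[name] += 1

def services_needing_update():
    return [s["name"] for s in deployed_services
            if s["version"] < templates[s["template"]]]

publish_template_update("stock-trader")  # e.g. a Patch Tuesday OS refresh
print(services_needing_update())
# ['stock-trader-prod', 'stock-trader-test'] -- owners update on their own terms
```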

BRAD ANDERSON: This is a really important concept for you to understand. What you're looking at right here is a template for a four-tier application. I'm going to walk left to right to make sure you understand some of the components. On the left-hand side, you've got a Web tier. This is a combination of IIS and an operating system image, but the IIS team has already done the work to separate IIS from the operating system. On the far right, you've got the data tier. This is SQL and Windows, but the SQL team, again, has already done the work that separates out SQL and its state from the operating system.

Now, those two middle tiers, your logic tiers, those are existing .NET applications that have been built. They could have been built 5 years ago, 10 years ago. And what we're going to show you here is that, using App-V, application virtualization, which is running on tens of millions of PCs around the world, we're going to give you the ability to separate your existing .NET application from the operating system and then deliver some of the value I'm talking about. So, let's dive deeper into the template.

CHRIS STIRRAT: OK. So, I’m going to go ahead and show you what makes a service template. Here are some of the building blocks. Now, if you’ve used Virtual Machine Manager before, this will look familiar to you. We have a hardware configuration that we track and maintain, an operating system configuration, but what we’re showing new here is we have a separate application configuration.

As Brad talked about, we’re using server application virtualization here. So, we can separate the operating system from the application. This allows us to manage them as two separate entities. So, you can see we’ve brought an application out here, and we actually have configuration specific to that application.

Now, the real power of this is many applications can share the same OS image now. And as Brad showed you on the slide previously, this is super powerful because you can now reduce the number of OS images that you actually need to maintain.

BRAD ANDERSON: One of the great things about this is that Virtual Machine Manager will track that relationship. So, for anybody who uses this template that Chris is building right here to deploy a service, if the template owner goes and says, "I'm going to update the underlying operating system building blocks because Patch Tuesday is here and we need to update that operating system," it will actually notify the service owners of the updates. Let's take a look at how that looks.

CHRIS STIRRAT: OK. So, I'm going to change views here. What you're looking at here is a list of deployed services in Virtual Machine Manager. Let's take a close look at the Stock Trader service here, and it's telling us some things. It's telling us that there are updated resources available for it. As Brad mentioned, we track that relationship. So, somebody has gone and updated the Stock Trader service template, and now we're notifying all of the application owners, or the service owners, that they need to update their service to match the new template.

Now, think about the power of this, especially in a self-service use case. Self-service users will be notified that there's an update available, and then on their own terms, in their own maintenance windows, they can update that application. It's a very powerful concept.

So, in the couple of minutes I've had up here, we've talked about some great innovations coming in the next release of the product. You can manage at the service level using our service templates; you can reduce the number of OS images you need to maintain using application virtualization; and you can empower your self-service users while still having the control you need.

BRAD ANDERSON: That’s great, Chris. Thank you.

CHRIS STIRRAT: Thank you. (Applause.)

BRAD ANDERSON: It's a fairly complex set of topics to try to wrap your head around. If you think about what the true north is, where the compass is pointing, we need to move as an industry to the point where we're thinking about how you manage at the service level, not at the individual VM level. Within Virtual Machine Manager 2012, if you want to continue to manage at the VM level, you can do that. We're not going to force you to move to the service; you'll be able to do the same thing on your terms. But think about the benefits that come as you move to using these templates, as you move to managing at the service level: you reduce the number of OS images, you give that self-service experience to the service owner, or the service consumer, and then you let them update their application with the underlying components and the new operating system when it makes sense for them as the business owner, as the service owner, to do that.

I would encourage you to go spend time with the Virtual Machine Manager team. Go into the lab; attend those sessions; really get your head wrapped around what it means to use these templates and manage at the service level. It will have dramatic impact on your efficiency and your ability to scale out and give a higher level of value to the business.

Now, let's go to the next step in this architecture. So, we are announcing that the beta of Virtual Machine Manager 2012 is available today. Go download it. This team has been on cadence; they've literally been releasing new versions of Virtual Machine Manager on a yearly basis. And far and away, this is the most significant version, with the most work and the most innovation we've ever done in Virtual Machine Manager. So, I really encourage you to go get that, and give us feedback as you test it.

OK. So, now we've taken your existing, diverse infrastructure, built a fabric on top of it, created a cloud, and enabled you to start deploying apps on common operating system images in that cloud. With that, you now have the confidence to start doing things like guaranteeing an SLA, and the SLA in our environments is key. As we move more and more of our work to a service, whether that be Exchange Online, SharePoint Online, Windows Intune or Office 365, as we work with customers on these, we actually guarantee a service level with dollars.

If we don't meet the service level that we guarantee when we sign up in this partnership for you to consume these services, we don't expect you to pay us. And, at the end of the day, again, we deploy these infrastructures, we deploy clouds, we virtualize; it's all about the app.
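One way to picture a dollar-backed SLA is as a credit schedule keyed to measured uptime. The tiers below are invented for illustration and are not Microsoft's actual terms:

```python
# Illustrative SLA credit calculation: measure achieved uptime for the
# month and waive part of the bill when the guarantee is missed.
def sla_credit_percent(uptime):
    if uptime >= 0.9990:
        return 0    # guarantee met, full payment
    if uptime >= 0.9900:
        return 25   # partial credit
    return 100      # severe miss: the customer doesn't pay

minutes_in_month = 30 * 24 * 60
downtime_minutes = 90
uptime = 1 - downtime_minutes / minutes_in_month
print(f"uptime {uptime:.4%}, credit {sla_credit_percent(uptime)}%")
# -> uptime 99.7917%, credit 25%
```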

And so, we've been doing a lot of work in Ops Manager 2012 in the area of dashboarding, to make it just super easy for you to create dashboards that show you what the service level is. But then, how many of you have had a chance to take a look at the acquisition of AVIcode that we announced last fall? Let me tell you where this hit home for me, when the team brought it forward and said, "Listen, Brad, we should go acquire this technology." How many of you have Xbox and Xbox Live? I have two boys, 12 and 8, and they live on "Halo"; they live on the shoot-'em-up games.

The thing that brought it home for me was when one of these big titles was released last year, and we were expecting a massive spike of Xbox Live users. We actually went over and watched how the Xbox team monitored and managed this massive increase that was expected as people bought that game on the first day and tried it online. And as you can imagine, we saw a massive spike, many, many times the average number of concurrent daily Xbox Live users. And what were they using to manage and monitor that? They were using Ops Manager and AVIcode.

We've talked with you for a number of years about this vision of dynamic IT, and last year Bob was here and talked about how we've delivered dynamic IT. As I thought about AVIcode, about what you're going to see here in a couple of minutes, and about how simple it makes it for us as IT professionals to quickly drill down, diagnose an issue, and get to the root cause, I thought about the relationship between developers and IT, and how dramatically this can simplify it. It's another example of the work that we're doing to bring development and IT closer together.

So, what you're going to see now is a demonstration of the work that we're doing around dashboarding, and how all of you can start to use AVIcode. And mentally, as you think about AVIcode: if you own Ops Manager, which most of you do, you're going to be able to take advantage of the AVIcode capabilities as we continue to integrate them into System Center.

But let's show you how this is going to dramatically simplify our lives. Again, coming back to my comments that "it's all about the app" and "Microsoft brings wisdom to your cloud," let's invite Shilpa out to give you a demonstration. Let's give her a hand. (Applause.)

SHILPA RANGANATHAN: Hi, Brad. Thank you.

Good morning. Today you're going to see for the very first time how Operations Manager and AVIcode will empower you to gain deep insight into the health of your application and reduce mean time to repair.

Prior to coming to Microsoft, I used to work at a large online travel company. Being responsible for the performance and availability of the application, I would constantly worry about customers experiencing issues. What I really wanted to do was to proactively address these performance issues before my customers ever saw them. In today’s demo, you’ll see just how easy it is to achieve this with Operations Manager and AVIcode.

So, let's dive right in. I'm in the Operations Manager console. We have a great new dashboard that lets me see the health of my Stock Trader application. At the top left, I have insight into the various components of the application, alongside a map view that gives me health perspectives from different locations around the world, and charts for service level objectives and performance at the bottom. What I want to draw your attention to is the alert that was generated, telling me that client performance might be slow.

Now, this alert was generated using AVIcode's client experience monitoring functionality. What it really tells me is that my end users interacting with Stock Trader are seeing less-than-desirable performance. So, I'm going to use AVIcode to drill deeper and figure out what is wrong here.

So, clicking on this, I get a view that shows me the entire time it took for the page to load. This is the time from when the customer clicks the mouse to when the entire page is actually rendered. The colored bar graph here shows me that the largest amount of latency came from the server response time. Now, most of you probably go through this every morning. You look at an issue and say, "Well, my instinct tells me this is likely in the developer's code." But my developer, of course, will disagree.

You probably have these conversations with your development team over and over again. The developer comes back saying, "I need a repro of the problem. The problem is not in my code. The issue is with your configuration." We all know how stressful and time-consuming these conversations can be. So what do I do to get to the root cause of this issue within a matter of minutes, so I can escalate to my developer with a high degree of confidence?

With AVIcode, I have the ability to drill deeper. And if you look at the view here, I can see that 96.64 percent of the time was spent in the database. Not just that, I can drill even deeper, down to the exact stored procedure whose execution caused this performance issue. Huge, huge from a development standpoint. (Applause.)
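
Conceptually, that drill-down is an exercise in attributing each request's elapsed time to the components it passed through and surfacing the dominant contributor. Here is a minimal Python sketch of the attribution step, with made-up timings and a hypothetical stored-procedure name; it is not how AVIcode is actually implemented.

```python
# Made-up per-request timing breakdown, in milliseconds, of the kind
# an application performance monitor collects.
request_timings = {
    "render": 35,
    "network": 48,
    "app_code": 60,
    "database": {"usp_GetQuotes": 3890, "usp_GetOrders": 210},
}

def flatten(timings, prefix=""):
    """Yield (component, ms) pairs, descending into nested breakdowns."""
    for name, value in timings.items():
        label = f"{prefix}{name}"
        if isinstance(value, dict):
            yield from flatten(value, prefix=label + "/")
        else:
            yield label, value

parts = dict(flatten(request_timings))
total = sum(parts.values())
worst, ms = max(parts.items(), key=lambda kv: kv[1])
print(f"{worst} accounts for {ms / total:.2%} of {total} ms")
# -> database/usp_GetQuotes accounts for 91.68% of 4243 ms
```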

So, what took you days and weeks in the past can take you minutes, because with AVIcode's collaborative features you can forward this information directly to a developer. And the best part is, the developer does not even need AVIcode installed to look at this information.

The important thing to note here is that you will never need to provide a live debugging environment to your developer, and your developer will not need to instrument or change the application for you to leverage this functionality. As the monitoring expert in your organization, you've just been empowered to eliminate the finger pointing, gain deep visibility into the health of your application, determine root cause in a matter of minutes, and, finally, gain a ton of credibility with your development team.

Finally, if your manager is anything like mine, he or she is always asking to see the health of the application outside of the Operations Manager console. So, let's take a look. What I've done here is leverage the powerful features of Operations Manager along with those of SharePoint: I've published the dashboard we saw earlier to SharePoint and given my manager access, so that he has the right level of visibility into my application in a single view. And he did not need to log into the Operations Manager console to get it.

Thank you. (Applause.)

BRAD ANDERSON: Hey, isn't that demo pretty cool? AVIcode takes your existing code, and you can actually drill down to the stored procedure in a SQL call where the issue is. In Operations Manager 2012, you'll see that first level of integration with the AVIcode acquisition, and you can expect us to keep integrating it more and more deeply.

Coming back to this architecture slide that we've had, let me just walk through what we've seen over the last hour. We actually started with the self-service portal, with Jeremy playing the role of the service consumer who needed additional capacity because he was out of capacity in the cloud where he wanted to deploy a new service.

We then walked you through the innovations we've made in Virtual Machine Manager, showing how easy it is to go and add that capacity; and using that same interface, you could have created an entirely new cloud if you wanted to.

We showed you some of the innovations we're making in Operations Manager that allow you to monitor that fabric: the storage, the network, and the compute. And one of the pieces I'll point out here is that, through partnerships we have with a number of companies across the industry, you can also monitor the health of the fabric even if it's not built on Hyper-V.

One of the most interesting things we're seeing right now is customers using Operations Manager to monitor the health of their VMware environments. And our view is this: as we help you create the fabric on which you're going to build your cloud, we believe you're going to be hybrid, and you should make the choice of what's best for you, whether that means running that cloud in your datacenter or in a partner's datacenter, and whether that means running on Hyper-V or another hypervisor. We will give you the tools that allow you to make the decisions that are best for your organization, and then we'll help you create that fabric given the decisions you've made.

We then walked you through the innovations we're making with application virtualization, the work we're doing across Microsoft to separate the apps from the operating system, and the value that brings. And Shilpa just walked you through some of the innovation in dashboarding, and the integration of that wonderful technology from AVIcode.

So, we started out talking about these two roles, a service provider and a service consumer. The majority of our time today has been spent showing you the tools that the service provider will use to define and build these clouds and to delegate whatever set of rights they want to give to the service consumer.

So, now let's end where we started. Let's go back to Project Concero, the self-service portal for the service consumer, see how the capacity that was added throughout the day shows up, and show you how simple it is to deploy your new service.

To do that, let’s welcome Jeremy back out. Give him a hand. (Applause.)

JEREMY WINTER: Great to see you again, Brad.

OK. Let's close the loop. We've jumped back into Concero to deploy that application, now that I have the capacity in the finance cloud I requested earlier. Before I can do that deployment, I need to select a service template, which you also saw earlier. And as my service templates flow in, I'm going to look for the latest version of Stock Trader, Stock Trader 2.0.

Now, as I select this, Stock Trader will load with the predefined configuration settings we already set, and that really simplifies the deployment I need to do across the different clouds. The service template will also ensure that I'm compliant with the latest IT standards and policies.

Now, let's get this thing going. I'm going to use the default settings, and I'm going to call this My Stock Trader. I'm going to use high priority so it executes right now for us today. I'll finish that and start the deployment, and you can see in the bottom right-hand corner that the deployment has already started.
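
Stripped to its essentials, the deployment Jeremy just performed is a small, tightly constrained request against a template the provider has already vetted. A Python sketch with hypothetical names follows; Concero's actual interface is the portal shown on stage.

```python
from dataclasses import dataclass

@dataclass
class DeploymentRequest:
    """What the service consumer actually supplies; everything else
    (OS images, app packages, settings) is pinned by the template."""
    template: str      # e.g., "StockTrader 2.0"
    cloud: str         # a cloud the consumer has delegated rights to
    service_name: str  # friendly name for this instance
    priority: str = "normal"

def deploy(request: DeploymentRequest) -> str:
    # Because the template fixes the configuration, the request
    # cannot drift from the IT standards and policies baked into it.
    return (f"Deploying '{request.service_name}' from template "
            f"'{request.template}' into cloud '{request.cloud}' "
            f"at {request.priority} priority")

print(deploy(DeploymentRequest(
    template="StockTrader 2.0",
    cloud="Finance",
    service_name="My Stock Trader",
    priority="high",
)))
```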

Now, again, the key point about self-service is that with Concero we enable those application owners to get going. Thank you so much, everyone; have a great day, and enjoy your MMS show. (Applause.)

BRAD ANDERSON: Thanks, Jeremy.

Let me tell you a little more about Project Concero. The way we've architected this, Project Concero sits on top of Virtual Machine Manager. We're going to be releasing Project Concero on more of a cloud cadence, so we'll update it multiple times during a given year and post those updates. You can download them, and they just literally drop on top of your existing System Center infrastructure.

And imagine where we'll be able to take this. Jeremy showed you that view of the service, and you saw the four tiers. Start to imagine us taking the Operations Manager health information and surfacing it in that self-service portal, so you can understand the health of the service and the health of its different components. For the service consumer, Project Concero becomes the self-service portal where we surface all the information they need to do their job within the constraints that the IT provider, the service provider, gives them.

So, remember, at the beginning of the session I talked about how, for many of you, there's angst that comes with thinking about the cloud. But the reality is, if you don't make it simple and easy for your service consumers to consume capacity from you, they will go around you. Using what we've shown you today, we're giving you the tools to create the cloud, put rules and policies in place, delegate control and access to the service owners, and let them run their business and their applications within those constraints. At the same time, it gives the service provider, the infrastructure people, assurance about compliance, cost, and predictability, and you're able to manage VM sprawl. So these two roles are really important to understand as you think about how you're going to build out your private cloud.

Now, in summary, there are three things I hope you walk away with from today's keynote with respect to the innovations Microsoft is delivering to enable you to build a private cloud.

The first is that it's all about enabling you to embrace the cloud. Again, the cloud is a compute model on your terms. We fundamentally believe that the majority of you will be running in a hybrid environment, consuming cloud capacity from your datacenters and from your partners' datacenters, and you'll be able to choose which hypervisor you want to run it on.

You could actually have a service like the one Jeremy showed with the Web front end running on Windows Azure for a scale-out architecture, but with the data, for compliance reasons or whatever the case may be, running in your datacenter: a hybrid service. We're going to give you the tools to look at all of your clouds and all of the resources you want to use to build them, with a very consistent and simple experience for building that.

Two, Microsoft workloads run best on the Microsoft virtualization infrastructure. We work across the company to optimize, so that SQL runs better on Hyper-V than on any other solution on the market; the same goes for Exchange, SharePoint, and right down the list. And then we bring wisdom to that for you. Ultimately, it is all about the application. The wisdom that gets created in the form of, say, a management pack, and the advice that comes down through System Center Advisor from the engineering, consulting, and services organizations, delivers you knowledge that helps ensure your application, and the service level you're guaranteeing, is going to be there. That's one of the promises you get when you partner with Microsoft to do this.

And then finally, that third point, just to reinforce: System Center is really the piece that builds that private cloud for you. When customers ask me, "What's the difference between a highly virtualized datacenter and a private cloud?" the answer is the increase in management, process, and expertise.

What I hope you've seen over the last 75 minutes is that Microsoft is making the private cloud very approachable for you: very easy to use, very easy to build, very easy to deploy, using the tools and the capabilities that all of you are already using. System Center is the most widely used management solution on the face of the earth, and that's the interface you use to build out these private clouds.

Ultimately, this is about empowering you, as individuals, as an IT organization, and in your personal careers, to increase the value you bring to your businesses and to differentiate your businesses.

Have a great day today. Like I said, we are incredibly grateful for your willingness to come and spend an entire week with us, and we've never been more excited to show you the innovation we're delivering. Have a great day. Have a great night. Don't stay up too late. Come back tomorrow at 8:30, and we'll spend the entire day talking about how you can empower others, and how you as an IT organization can embrace the consumerization of IT, along with some of the innovation we're doing in endpoint enablement and endpoint management.

Thank you so much. Have a great day.

(Applause.)

END