Bob Muglia: Microsoft Management Summit 2010 Keynote

Microsoft Management Summit 2010 Keynote
Remarks by Bob Muglia, President, Server and Tools Business
Las Vegas, Nevada
April 20, 2010

ANNOUNCER: Ladies and gentlemen, please welcome Corporate Vice President for the Microsoft Corporation, Brad Anderson. (Applause.)

BRAD ANDERSON: All right. Hey, let’s go. Welcome, everybody. Welcome to Vegas, welcome to the Management Summit. You know, we are incredibly excited to be here with you and to hear from you and to hear what we’re doing well, and where we can improve and learn from all of you.

In so many ways, I think this is the most significant Management Summit we’ve ever had. In terms of what’s happening in the industry, I think we’re at a historic time for the industry if you think about the cloud, the benefits, and how we’re going to start to move and take full advantage of the cloud. In terms of innovation, you know, we have never delivered as many products and as much innovation as we’re going to see today and tomorrow and throughout the conference. Finally, I think that in terms of the amount of energy and the amount of excitement around management, it’s unparalleled.

Now, in terms of how we think about things and how we’re progressing, you know, in terms of the parts that we’re delivering, one of the things that you will see is we’re delivering Dynamic IT. And a big part of what Bob’s going to talk about with you this morning is all that delivery and how we connect all the pieces in our Dynamic IT vision and set the pace for the cloud.

Now, talking about the cloud for a minute, you know, our friends in Iceland wanted to make sure that they were a big part of the cloud, which introduced a number of challenges for a lot of people trying to make it here from Europe. And there are some extraordinary stories about some of you who spent 50 hours on planes, trains, and automobiles to get here.

Let me tell you a little bit more about all of you as far as some of the interesting data points about the attendees. Three out of four of you are using Configuration Manager. And of that group, 80 percent are using R2.

Two out of three of you are using Operations Manager, over 80 percent on the R2 product as well. Interesting data point about that, 50 percent of all the Ops Manager customers are using the cross-platform capabilities right now to manage across Windows and across other platforms.

Over half of you are using Virtual Machine Manager. Over 25 percent of you are using App-V today in production. And one of the things that I think is incredible is 10 percent of you have downloaded the beta of Service Manager and have put that through the paces.

So to all of you, I want to thank you on behalf of all of Microsoft, and especially on behalf of the teams that work on these products, for giving us the opportunity to partner with you in your business. And for those of you who are here for the first time, here to learn about what we do in System Center and across all of our management technologies: we want to earn the right to partner with you and your business.

Thanks for coming and let’s get started. (Applause.)

(Break for video segment.)

ANNOUNCER: Ladies and gentlemen, please welcome President, Server and Tools Business, for the Microsoft Corporation, Bob Muglia. (Applause.)

BOB MUGLIA: Well, good morning. Good morning and welcome to MMS. I am really glad to be back here at MMS. This is a great show, it’s a great opportunity for us to all work together and learn from each other. In fact, we have been on a journey together. That’s really what my talk is all about: the journey that we’ve had together. And at this show, there’s a great opportunity for us to learn from each other. You get a chance to hear from the people who write the products and build the products that you use every day. Our partners are here in force, and they’re so important and critical in completing the solutions that you use to drive your business. And from our perspective, we get a chance to hear from all of you. And it’s that input from you that allows us to do what we need to do to solve your problems.

So we’ve been on a journey together. And I want to sort of take you back for a few years and sort of start by talking about one very important milestone on that journey which happened seven years ago in 2003. 2003 was when we first announced the Dynamic Systems Initiative, which was the beginning of our path to Dynamic IT. We said it was a 10-year vision, and we’ve been working on it for seven years.

And if we think back — just think back to 2003. System Center did not exist. SMS 2003 had not yet shipped. MOM, MOM 2000 was pretty early. It was pretty rough back in those days. An awful lot has changed, but it’s changed because of the input and the feedback that all of you have given us.

So if we look at that journey, that journey to Dynamic IT, the thing that I’m so excited about, and what this show is very much all about, is how we’re delivering together on Dynamic IT. That destination, that vision that in 2003 seemed so distant is just ahead of us. We can see the fruition of all of that work that’s been done together.

And looking back on what we’ve been talking about with Dynamic IT, the core components of Dynamic IT are more relevant today in 2010 than they’ve ever been. The idea of thinking about the business and managing the IT of the business as a service, everything about how we think about delivering services to the business units and to our customers, even to end users, that’s more critical today than it ever was. Connecting the development process through to test and operations, and thinking about that as a consistent, coherent process, all of it connected together by a system driven by models. In fact, that idea of being model-driven, which you’ll hear about again and again in my talk this morning, is more important than ever as we go forward.

Thinking about unifying and virtualizing the entire environment, from the top to the bottom, thinking about the way the hardware works together with the virtualization subsystem, together with the platform layers, together with the applications and the management and operations system, thinking about all of that cohesively and automating every step of that wherever possible, that’s critical.

And something that’s certainly clear to all of us is the importance of the end user and the way the end user has become much more sophisticated, much more willing to work with technology, use multiple devices and deliver experiences to solve their problem. Whether that’s a consumer or part of the business, enabling the IT solutions for end users is critical.

So the vision that we’ve been working on for Dynamic IT is more critical today than it has ever been, and I’m going to talk a little bit about how that’s actually coming to reality through the products that we’re shipping now or will be shipping shortly.

Let me talk about something that has been present throughout the whole concept of Dynamic IT, back to the day we announced the Dynamic Systems Initiative, and that’s this idea of building applications that are designed for operations and connecting the process from the point of requirements definition of an application all the way through architectural design, development, validation, staging into production, and ongoing operations. Connecting that entire work flow.

I’ve talked about that several times here at this conference. Connecting that work flow between the developer and the IT operations person. And it’s certainly critical and there is a huge amount of opportunity to simplify the process, to reduce cost, and most importantly, to get products into market, business solutions into market faster than ever before.

At the core of this, there’s no question, is the model, the idea of having a well-defined model that defines the business applications, all of its components, its sub components, and the relationships between those. So, the model is key.

Now, one thing we’ve learned, and we’ve heard it from you: you learn things along the way. We didn’t know everything back in 2003 and 2004, but one of the things we learned is that when we first put this picture up, and I think that was in 2004 or 2005, the arrow went like this. The idea was that the developer created the application and moved it through the process into operations, and really the developer plays a very key role in this.

But one of the things we’ve learned is that the truth lives in the datacenter and that the model is actually defined within the operational system. And the evolution of the model is predominantly done inside the datacenter and the operational system.

If you look at the work that’s being done to define models today, it’s being done predominantly by IT operations. Back at Microsoft, we’re rapidly defining the next generation of models for our products on a go-forward basis, the next versions of System Center, the next versions of Visual Studio; we’re working on that right now. We shipped Visual Studio last week, we’re already onto the next one, and we’re very actively working on the next version of System Center.

Back at Microsoft, the model definition is being done by the team, by Brad Anderson’s team, by the System Center management team, because we’ve learned that truth exists in the datacenter, and it’s updated and maintained in the datacenter with development playing a key role in that evolution. So, it’s a key thing.

And this connection between the developer all the way through to operations is as important as ever. We are, in fact, delivering some very key parts of that model, and of that system, right now as a part of our Dynamic IT vision. There are products that are in market today that are delivering on what we talked about back in 2003-2004. And what I’d like to do now is to show you how we’re connecting that developer to operations and thinking about service management holistically, delivering on a key piece of Dynamic IT.

What I’d like to do is invite Sacha Dawes up to show us some of that in action. Sacha. (Applause.) Good morning.

SACHA DAWES: Good morning. Good morning, everyone. So, what I’m going to show you today and what you’re going to see is a day in the life of an operations analyst working in the datacenter. So, basically in the operations sphere, I’m responsible for making sure the applications are running, our line-of-business applications, our customer portals. What we have today is we’re monitoring through Operations Manager, part of System Center, our Contoso applications. So, it’s basically for Contoso Electronics, where our customers come in and order products.

So we’re monitoring the databases, the Web store and Web sites, they’re all running as expected. We even have a synthetic transaction on the bottom left-hand side running here in Las Vegas that’s actually performing some synthetic transactions with the Web site to make sure that everything is logged, and that users are happy.

Now, the development team is actually working on the next release of this product, and everyone is really excited. Our marketing team is working to get it out. The problem is, there’s a lot of internal pressure. Effectively, if something does go wrong, we run the risk that we’re going to end up in a crossfire between development and operations to find out what went wrong, who needs to fix it, and really who’s to blame.

So at Contoso, we now have leveraged the power of System Center with Visual Studio to really align the operations and the processes from development into test and operations. I’m just going to show you a little bit about that.

I’m actually going to connect into my pre-staging environment. Now, here I have access to something that our development team knows and loves and really trusts. What you’re looking at here is Visual Studio 2010. And Visual Studio 2010 has very tight integration with System Center and Virtual Machine Manager.

BOB MUGLIA: To be clear, this is the lab management product, which is new in Visual Studio 2010. And one of the attributes of it is it provides a very complete test environment for the test part of development to be able to test applications that are being built. And the infrastructure that Visual Studio 2010 provides allows you to deploy your application in a test lab on a virtualized infrastructure using the combination of Hyper-V, and in fact, underneath there is actually a subset of Virtual Machine Manager to manage this environment for the developers.

Now, it’s useful for the testers to run this, but it’s also interesting to be used as a test environment for operations to validate that the application works the way it should before it gets staged into production.

SACHA DAWES: And that’s exactly what we’re seeing here. We’ve got our next revision of the Contoso Electronics Environment. We actually have a server that’s running our Web applications and our database. And I’m going to switch over to the testing center where I can actually perform a number of test plans.

I have some that are automated that have been agreed on between myself, operations, and the development team. And I have a manual test. I’m going to run through one of these manual tests here. And what’s happening now is that Visual Studio has effectively started up a number of diagnostics; it’s going to record exactly what I do within the environment. And in the back end, this actually kicks off a new feature in Visual Studio called IntelliTrace. IntelliTrace is gathering all the intelligence of the application; it’s monitoring the instrumentation, gathering all the calls, et cetera. So, as I actually run through these test plans, it’s going to be gathering all that information.

So I’m going to run through. I just clicked on component parts. I’m going to click on add to cart, that came up. Next thing I have to do is increase the quantity to 10 and update the shopping cart.

Now, it says over here that I should get a 10-percent discount. Now, unless my math is off, Bob, I don’t see that discount applied there. So, I’m going to come out — I’m actually going to mark that as a failure. I’m going to put in a comment for the development team that says “discount not applied.” I’m going to take a snapshot of what I can see right here, a full screen shot that’s going to get attached into the test plan right here. I can even take a snapshot of the virtual environment that I’m running against and actually attach that as well.

BOB MUGLIA: Yeah, and when Sacha does this, there’s a lot of information that goes back to the development team. They get a bug report, which has all of this information in it, as you see here, and perhaps most importantly, the actual running environment is captured through IntelliTrace, and the developers can actually go in and debug the session exactly as it was run by the tester, or by the person who was doing the validation prior to staging it into operations. So, they get to actually step backwards and forwards through the application to understand exactly what went wrong, to debug and fix the problem, getting rid of the usual repro back-and-forth.

SACHA DAWES: Exactly. And that’s exactly what we’re seeing here. I’m just running one of the videos. And that’s going to get sent to the development team for them to help really analyze what’s going on. So, I just come out of there, I’m going to assign that ticket out into the environment. I’ll send that off to you, Bob.

BOB MUGLIA: I’ll work on that in my spare time.

SACHA DAWES: Great. So, that’s going off to the development team and they’ll go ahead and get that work done.

BOB MUGLIA: I’m a pretty fast worker though, aren’t I?

SACHA DAWES: Absolutely. In fact, already the bug is fixed. Thanks for doing that, Bob. Did you see his hands moving?

So what we have here, we’re back in Operations Manager and we’re actually looking at the new release. We actually have the new service model that we’re going to be monitoring when it moves into production. Now, everything’s been tested in the background, and we’re all ready to go.

We need to move those virtual machines into production. To do that, I’m going to use a new product that is part of the System Center Server Management Suite, which is actually going to run the orchestration.

BOB MUGLIA: So what we’re doing here is taking — you can look at this process as, OK, the developer fixed the bug, it was restaged back into the virtualized test environment, and it was in fact validated that the system works as it should work. What we now need to do is stage this out of test and into a staging server, run a quick set of validations on it to make sure that it’s running in the production environment on the staging side, and then switch that over into production. That’s an orchestration process, that’s a set of steps that must be orchestrated. And classically, those steps which might consist of many machines being updated, many heterogeneous systems, perhaps a set of PowerShell scripts to automate parts of it, typically, those sets of steps have to be done one at a time.

What we’ve got is an orchestration service that enables you to run things on an orchestrated basis, not just within the Microsoft environment, but also connecting to heterogeneous systems, Oracle, UNIX, Linux systems, and staging orchestration across those environments as well. Opalis is now part of the Microsoft Datacenter suite. So, if you own Datacenter, you already own Opalis, and this is something that you can take advantage of today.

SACHA DAWES: Yeah, you can see on the bottom side, as Bob said, all those tasks I previously had to perform manually, whether using Virtual Machine Manager to move those VMs or running a script to perform that integration. You can see exactly what’s happening in the bottom part of the screen. Of course you wouldn’t normally watch this, but you can see this is pushing the VMs into staging; it’s going to validate those VMs, clone them into production, run some tests to make sure the users will be able to connect into that environment, and then switch over and do the load balancing on the back end. So, ultimately, we’ve got that end-to-end process right in front of our eyes right here.
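To make the steps Opalis is sequencing here concrete, the following is a minimal PowerShell sketch of the equivalent manual work against Virtual Machine Manager: move the web-tier VMs onto a staging host, then run a simple smoke test before the load balancer is switched over. Get-VMMServer, Get-VMHost, Get-VM, and Move-VM are VMM cmdlets; the server, host, VM, and URL names are placeholders for this example, not values from the demo.

    # Hypothetical sketch of the staging steps an Opalis runbook would orchestrate.
    # The server, host, VM, and URL names below are placeholders.
    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

    Get-VMMServer -ComputerName "vmm.contoso.com" | Out-Null
    $stagingHost = Get-VMHost -ComputerName "staginghost01.contoso.com"

    # Move each web-tier VM out of the test lab and onto the staging host
    foreach ($vm in Get-VM | Where-Object { $_.Name -like "ContosoWeb*" }) {
        Move-VM -VM $vm -VMHost $stagingHost -Path "D:\VMs"
    }

    # Simple smoke test before the load balancer is pointed at the new VMs
    $page = (New-Object System.Net.WebClient).DownloadString("http://staging.contoso.com/store")
    if ($page -notmatch "Contoso") {
        throw "Staging validation failed; aborting the cutover."
    }

The value Opalis adds over a script like this is sequencing the same steps across heterogeneous systems, handling branching and retries, and recording each step for audit.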

BOB MUGLIA: Great. Thank you very much, Sacha.

SACHA DAWES: Thank you, Bob. (Applause.)

BOB MUGLIA: So thinking about the overall business process as a service, connecting development all the way through to all of the stages of operations and the ongoing life cycle of an application. Making that connection between the development team and the operations team much more seamless and much more productive. Capturing the information that’s necessary as a part of the model, as a part of the process of developing software and releasing it into operations, capturing that in business systems that are stored and available to reduce the amount of effort that people have to spend and the cycles that are spent sending things back and forth.

Taking automation to the next level, enabling you to stage your process across multiple systems within your organization, Microsoft systems and heterogeneous systems, and in the future, cloud and hosted-based systems. So, this sort of connection is a key part of what we’ve been focusing on for these past seven years with Dynamic IT.

And thinking about that, it’s really that overall stack that is so important, really starting with deep relationships around all of the different hardware that you use. We work every day with all of the hardware vendors that you are working with as well, and we’re working to make sure our products work as seamlessly as possible with that hardware and perform as well as possible on that hardware.

Having virtualization built in, very high-performance, mission-critical-capable virtualization built into the system from the ground up. Thinking about how the operating system, the application infrastructure, the database, all of those systems evolve to enable you to reduce the time to solution and be able to reduce your overall costs, allowing you to get things done faster and easier than you’ve ever done before.

And then thinking about management in a very holistic way, really coming from the perspective of delivering IT as a service to your organization and enabling the entire process from the beginning to the end. You can sort of think of System Center as evolving into the ERP system for IT, an overall application to manage your IT process from the very beginning all the way through to the end regardless of where it exists today in your own datacenters, or tomorrow in a cloud-hosted world. So, physical, virtual, applications all integrated together.

And thinking about how all of those pieces connect and how we can manage this much, much more as a delivered service, what I’d like to do is invite Kenon Owens up to show you something that is quite unique to System Center and Hyper-V and that’s a long-distance live migration, but also very importantly, thinking about the process behind what it takes to do a mission-critical operation like this. Kenon, come on up. (Applause.) Good morning.

KENON OWENS: Good morning, Bob. Well, here at Contoso, we are a growing company. And the Web application that Sacha’s team has developed has really taken off. In fact, it’s done so well that we’re outgrowing the capacity of our downtown Chicago datacenter.

Through this System Center service map, what we can see is that the customer experience for this Web store application isn’t up to par, and the real solution is that we need to move the application to a new datacenter.

So what we’ve done is we’ve contracted with a hosting provider to build the incremental capacity that we need, and we’re going to move the application to that datacenter.

BOB MUGLIA: So what you see here really is a view within Operations Manager of a model of both the current datacenter environment as well as the new datacenter environment, and that model is a critical part of understanding how to orchestrate and drive this overall process. And once again, the model was designed within operations because truth lives in the datacenter.

KENON OWENS: Exactly. And we’re actually rendering this Visio diagram through SharePoint 2010 to show the status of what’s happening with the system. So, we’re moving this application — you know how difficult it is to move an application: you’ve got to make sure that the network is correct and the infrastructure is set up, and you’re probably going to have some downtime when you move that application. What we’re going to show you today is how to move an application from one datacenter to a completely separate datacenter with no downtime for that application.

Within System Center, we have created a change request for this datacenter move. Now, a change request, obviously, has a lot of activities associated with it, and within these activities, we’re partway through: we’ve already done most of the approval process and are getting ready for the move.

BOB MUGLIA: So in thinking about managing the process associated with something as significant as taking a key set of your applications and moving them from one datacenter to another, that’s a complicated process and it has to be managed across many different people within the IT organization. With our new Service Manager product, that entire process, that whole human work flow part of that process can be fully automated and the task can be fully assigned.

You know, we can see here we have an initial change request that came in because of the capacity requirements that were necessary; the network needed to be reviewed and set up in order to enable it. We needed to get the storage subsystem set up so that it was fully replicated between the two datacenters to enable the migration seamlessly. And we have to have a cluster set up. One of the features in Windows Server 2008 R2 is the ability to support multi-site clusters with automatic fail-over, as well as live migration between those two different datacenters that are connected within a cluster.

So all of these stages needed to be set up and managed across different people within the IT organization. And managing that entire process, making sure that IT continues to be delivered as a service and the service level is met, is what Service Manager does.

Now, one of the other important things that’s necessary before you move an application from one datacenter to another is to make sure that it’s fully backed up. And what we’re doing here is using Data Protection Manager to do that. Data Protection Manager has really been designed precisely for this set of functions to help you back up and manage virtualized instances of your applications, including the files and user items within those virtualized instances.

So managing and fully backing up a clustered system, as well as the files within it is what Data Protection Manager provides. So, we’ve got all those things ready, looks like we’re ready to do the live migration.

KENON OWENS: We’re ready to do the live migration. So, we’ve created a task here which will orchestrate the migration when we move this service, or other services, over to the new datacenter.

So we click off the task, and now we start migrating the application to the new datacenter. We set up this stretch cluster. And you’ve seen live migration before, where we have two systems within the same site and we’ve migrated the application between them. What we’re doing now, with multi-site clustering, is migrating the application from one datacenter to the other, leveraging our partnership with HP, where they have added their cluster extensions to Microsoft clustering so that we can do this migration.

BOB MUGLIA: System Center and Hyper-V are the only products in the market that enable a long-distance live migration that fully coordinates the movement of the virtual machine and the underlying storage system. And in this case, we’re partnering with HP to provide an underlying storage system as well as the storage replication. We work with other storage partners as well, but obviously we have a very strong partnership with HP. In January, we announced a very broad relationship with HP where we’re working with them in virtualization and management as well as in database and messaging, really focusing on providing consistent solutions to you. It’s just a great example of how Microsoft is working together with partners in the industry to supply you with what you need, including groundbreaking, mission-critical capabilities that no other system can provide, like long-distance, fully coordinated live migration.
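As a rough sketch of what kicking off that migration looks like from the command line, the Windows Server 2008 R2 failover clustering module exposes a cmdlet that live-migrates a clustered virtual machine to another node; in a multi-site cluster that node can sit in the other datacenter. The cluster, group, and node names below are placeholders, and the coordinated storage replication described above is assumed to be handled by the storage array’s cluster extensions rather than shown here.

    # Minimal sketch: live-migrate a clustered VM role to a node in the other site.
    # Names are placeholders; storage replication is assumed to be handled by the
    # array's cluster extensions (the HP scenario described above).
    Import-Module FailoverClusters

    Move-ClusterVirtualMachineRole -Cluster "ContosoStretchCluster" `
        -Name "ContosoWebStoreVM" -Node "NewDatacenter-Node01"

In the demo, of course, System Center drives this as one orchestrated task rather than an administrator running the cmdlet by hand.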

KENON OWENS: Thanks, Bob.

BOB MUGLIA: Great, thanks a lot, Kenon. (Applause.)

So once again, thinking about IT as a service, delivering a service to customers, coordinating all of that, and building our tools such as Service Manager and Opalis to enable the orchestration of that. Everything in the datacenter, every piece of it, including backup: all of those things are what we’re focusing on with System Center to really deliver on that vision of Dynamic IT.

As a part of that, I’m very pleased to announce that today we’re announcing the release of both System Center Data Protection Manager 2010 and Service Manager 2010. These are both great products that we feel — (Applause.)

Data Protection Manager, this is I think the third or fourth version of Data Protection Manager, and we’ve been learning from you, and we really now think we’ve nailed the mission-critical needs of an enterprise and managing clustered, high-availability systems.

Service Manager is a product that’s been a long time coming, and it has evolved along the way based on your feedback, so we feel very good about what this first release can do, and we look forward to taking it further into areas like compliance management. We see this product as the centerpiece of what we’re going to be delivering, and we’re very excited about what it will do.

Dynamic IT, it’s been a 10-year vision since 2003. We’ve been on the road together, we’ve been on this journey together for seven of those ten years. What I’m really pleased to see is that with the products that we’re shipping this year, and then the products that are coming, we are delivering on that vision. That vision is becoming a reality. So, what we’ve talked about with Dynamic IT, we’ve been in this together, and it is becoming very real. Today, what I showed you is the service-enabled, process-led, model-driven, and unified and virtualized parts of Dynamic IT. Tomorrow, Brad is going to get up and talk about the client side, and he will put much more emphasis on the user-focused part, which is also being delivered.

Now, Dynamic IT was set out as a 10-year vision. That doesn’t mean that these things come to an end at the end of seven, eight, nine, 10 years. We’ll get feedback from you for many, many years to come, and we’ll continue to evolve on this. But it is great to see that, with your help, we’ve been able to deliver on this vision of Dynamic IT.

OK, so seven years ago we talked about Dynamic IT. What’s next as we go forward? It’s interesting, we’re delivering on this, what’s the next thing on the list? Well, I think there is no question whatsoever that the next vision, the next step forward is really the cloud, how we take all the learning that we’ve done together with Dynamic IT and bring it forward into a cloud-based world.

And if we look back, the things we’ve been talking about with Dynamic IT look pretty interesting into the future. Now, I’m not going to say that we predicted the cloud in 2003, because we didn’t. But what I will say is that the steps that we’ve taken together and the things that we’ve learned together, the parts of Dynamic IT — service-enabled, process-led and model-driven, unified and virtualized, and user-focused — all four of those attributes of Dynamic IT apply critically to the cloud. In fact, you have to get those right in order to deliver on the cloud mission.

So we’re really launching today — I mean, today is really the beginning of the launch in some senses of the next stage forward, which is this focus on the cloud. And I’ll say that’s a 10-year vision as well, although it will come even faster than what we’ve seen happen with Dynamic IT.

So the first question: What is the cloud? Well, in its most basic, simplistic form, there are many attributes you can talk about with the cloud, but in its simplest form, the cloud is just-in-time provisioning and scaling of services on shared hardware. That’s the most straightforward definition of the cloud that we can come up with.

Now, as I said, there are many, many attributes that surround it and that help to deliver on the full vision of the cloud, but in the broadest sense, it’s just-in-time provisioning and scaling of resources, of services, on shared hardware. And what we can do with the cloud is accelerate the speed of solution delivery and lower the cost of IT. That’s the promise. It’s faster time to market for applications, and fairly dramatically lower cost associated with delivery.

As I said, all of the work that you’ve been doing with us these past several years applies to the cloud. It’s not like you have to throw everything away and re-learn it. We’ll take you forward with it. The effort that you’ve been putting in, all of the systems you’ve been building, those are all relevant as we move forward to the cloud, but the cloud will enable a broad set of new things that you can deliver to your customers and to your business, and we think some pretty exciting stuff will come from that.

Now often people have a conversation about public and private cloud. We know about examples of public cloud, Windows Azure is an example of a public cloud, Amazon’s AWS is an example of a public cloud. A number of hosting providers are providing what is often described as a public cloud. And yet, there are valid concerns about the nascence of those environments for running some very key IT systems. And so people want to take and learn from and get the benefits that are being delivered in the public cloud environment and be able to run it within their own datacenter, and that’s frequently called a private cloud.

Now, these definitions are well known in the industry, but one of the things I found when we thought through what it really means is that the definitions themselves are somewhat insufficient to really describe what needs to be done and the choices that are available to you as you think about moving your IT systems and your business applications forward.

The road to the cloud actually provides a number of different choices and a number of different opportunities for you to pursue, and I think it’s worth clicking down a level to understand that more.

And so let’s go down a level. We talk about public and private clouds at a high level, but at a lower level, when we think about clouds, there are really two different dimensions we think of. We think of a shared cloud, where the cloud environment is running services for multiple businesses, multiple organizations, and we think of dedicated environments, where a set of machines is dedicated to the IT needs of a single company, a single organization. And when we think about those datacenter environments, we recognize that there are instances of those that will exist within customer-built datacenters, within hoster or partner-based datacenters, as well as within Microsoft and the Microsoft datacenter. I’ll talk a little more about what we’re doing with the Microsoft datacenter, with Windows Azure, and how we’re learning from that. But those instances exist across all three.

And when you actually look at this matrix, every box, I believe, is checked. Now, you might say, OK, wait a minute, I’m running my own cloud in my own datacenter, isn’t that dedicated to me? It probably is, but there are instances where customers have wanted to run their own shared cloud. Financial organizations want to provide their financial products as a service to other big businesses, and they would like to run that within their own cloud environment, the private cloud, but in that sense, it’s shared. Or auto companies that want to have dealer networks where they’re providing services to syndicated dealers; that’s another example of a shared customer cloud.

Partners will clearly provide both. Hosters will provide a broad set of services, and we believe that hosters play a very important role going forward, serving both customers who want to participate as part of a large shared cloud with many companies and customers that want clouds dedicated to their organization.

And in Microsoft, with Windows Azure, today we’ve only announced a shared cloud; that’s all we’ve announced with Windows Azure. But we will wind up doing dedicated clouds, at least for government organizations that want an environment dedicated to them, with some special requirements.

So all of these, all cells of this matrix, are interesting. And from our perspective, we think that in order to deliver on this fully, we need to learn from our customers, work closely with our partners, and experience it ourselves. Why are we doing Windows Azure? What on earth are we thinking, running these big datacenters? Well, one of the things we’ve learned is that unless we run it ourselves, unless we learn it ourselves, we won’t build the products that you need to do the best job. The ability to really drive down this cost comes from changing the software. It’s the software that allows us to really drive down cost and speed time to market. And by running this ourselves, we get that information.

We know we will always work with partners, we’re going to continue to work with partners, hosters are an important part, and of course we’ll work with you, our customers, to supply you with dedicated, private clouds that you can run within your own organization.

And so what we’re working on right now and working on delivering as we move forward is to take all the learning we have today with Windows Server and System Center, all the work we’re doing with thousands of hosting partners around the world, and the learning that we’re getting from Windows Azure and running this ourselves every day at massive scale, and we’re building one platform across all three of these things, one consistent application model, and of course one management solution based on the System Center product that you know today. That’s where we’re heading into the future with the cloud.

And now what I want to do is talk a little bit about the promise of what the cloud will deliver, and also some of the specifics of how the cloud will change the way all of us do our jobs. The changes are fairly dramatic.

So the first thing that’s interesting to look at with the cloud is the hardware model and what’s going to happen to the hardware going forward. When we look at hardware, we’ve seen a dramatic change over, say, the last 20 years in the way hardware is designed. When I started at Microsoft 22 years ago, business ran on mainframes and minicomputers; that’s where business ran. Industry-standard hardware changed all of that. And it is the way of providing business-critical performance and capabilities at a much, much lower cost, and a much faster time to solution.

You know, we’re really pleased. We’ve been able to work with you, and we’ve been very successful with that. Industry-standard computers make up about 95 percent of all servers sold in the world, and Windows Server has over 75 percent share on those. So, Windows Server has become the predominant product that’s used by the industry to deliver business systems. And so there’s been a lot of learning from that.

And we’ve watched over time as industry standard computers have evolved from essentially PCs to very, very capable servers. But we realize when you’re running things at massive scale, there’s a different way to look at the hardware. And so I’ll just sort of take you back in time. We’ve been working to build a set of services, products delivered as a service, for about 15 years now. And in the early days, the 1990s and into the early 2000s, when we were delivering our MSN and Hotmail services, we would buy servers, probably similar to what you would do today. We would buy servers from our partners, HP, Dell, a number of our different partners, and we would take those servers out of boxes and screw them into racks and wire them all together.

We kind of came to the conclusion that we were buying so many servers that that was a pretty inefficient way of doing things. And so about five to seven years ago, we stopped buying servers and we contracted with a number of companies — there were a couple of special-purpose companies, but also our key hardware partners. We contracted with them to buy fully pre-assembled racks which had the servers fully pre-installed and fully wired together, and we could wheel these racks into our datacenters and then test them and bring them up much more quickly.

Well, this year, we’ll buy over 100,000 computers. That’s a lot of computers to buy when you’re buying them in racks like that. So, we’ve stopped doing that, by and large, and we’ve shifted to purchasing our servers in the form of containers. A container is effectively a shipping container. It has roughly 2,000 servers in it, more or less, and it can have many, many terabytes, up to almost a petabyte, of storage in it.

These containers weigh about 60,000 pounds. They’re wheeled in on trucks. What you see here is a picture of our Chicago datacenter. This is actually our real datacenter. Those containers are running both Windows Azure as well as Bing. And when you bring in servers like this in the form of a container, you literally get to the point where the container is delivered, it’s pre-tested by the hardware manufacturer, and we plug in power, Ethernet, and water. We actually now know how to plug in room-temperature water, not even chilled water, to keep it cool. And so we’re able to get a dramatic reduction in the overall hardware cost.

When you’re buying 2,000 servers at a shot, you know, everything matters. You’re looking at every screw, every piece of plastic, anything that’s not compute, memory, storage, networking, that’s superfluous inside a box like this. And so we’re able to work with our partners to drive down cost.

You know, we think that there’s a factor of ten cost reduction in the hardware. Think about that for a second. A factor of ten reduction in terms of what you would pay to get the equivalent amount of power. Now, sure, we’re buying these massive containers, and probably most of you are thinking, hey, I can’t buy 2,000 servers at a shot, but we’re working actively with all of our OEMs and they’re all interested in delivering smaller containers that might have just a hundred or a couple hundred servers within it that could deliver a private cloud or a dedicated cloud to you that, again, gets many of the benefits of this cost reduction.

So the work that we’re doing to build our massive-scale datacenters will apply to what you’re going to be running in your datacenter in the future, because Microsoft and the industry will deliver that together.

The next piece on the cloud is the application model. Existing applications can be moved to the cloud, and we will support helping you to move existing applications to the cloud. You can take an application within a virtual machine and move it to the cloud, and you will get the benefits of the hardware cost reduction I described, you’ll get some operational savings as well.

But in order to get the full benefit of what the cloud will deliver in terms of time to market and how fast it is to write applications, there are a set of services that will be delivered as a part of the cloud application platform that will enable existing applications to be modified, and new applications to be written, to reduce the time to market and to provide further cost savings.

We’re pioneering all of that with Windows Azure. We see some of that today with our Windows Azure environment where we think about applications that are natively designed to be elastic and scale out. Applications that are designed to be resilient to failures and to be always available. Bad things happen to good servers, and it’s a reality of the world that servers will fail, applications should not, service should continue to be delivered, and the cloud application model will enable applications to always be available.

One of the key attributes of faster time to market for applications is modeling, enabling modeling to be a core part of the way that applications are delivered. In fact, thinking about a cloud application without enabling it within a modeling subsystem is very confusing to me, because the idea of building applications that are elastically scalable, that work across this virtualized datacenter where you really don’t know what hardware the application is running on, because the system has decided that, requires an underlying model to enable it. So all of the work that we’ve been talking about with Dynamic IT applies directly to that. And, in fact, the model is the key part of speeding the time of application delivery.

What you’ll see is that the underlying system, the underlying operating system, the application platform, the database, will all become model-driven. We’re on our way to that. We’re taking a big step in SQL Server 2008 R2, about to ship, by defining a database, all of the tables and schemas, the views, the stored procedures, even the data, in the form of something we call a DAC model, and that’s about to be real. System Center will orchestrate and control those DAC models. We’re building that into the database.

You’ve seen how we’re driving that forward with System Center and Operations Manager. If you look at the work that we’re doing with AppFabric in the middle tier, that is also all model-driven. In fact, what I have a picture of here is our modeling language, which we code-named M, which is in beta right now. It’s a key part of the underlying environment for developers to build model-based systems. And, in fact, as we move forward into the future, System Center connects back to that: taking the service-enabled models that are built with System Center and then being able to express them in M code that is modified by developers is all a part of that.

So when we take all of this, we think that there is an opportunity to get a factor of 10, an order of magnitude, improvement in the time it takes to deliver an application to market, and that’s a pretty dramatic thing, given the backlog that exists within IT.

The third piece of the cloud is the operations model and how we can reduce the cost of operating these systems. You know, when you’re running five servers or 10 servers, you have one model of operation. If you’re running a few hundred, you have another. If you’re trying to run thousands, the idea of having operators doing tasks again and again becomes untenable.

Let me tell you, when you’re running hundreds of thousands of servers, you’ve got to do something different. And here, we’ve learned, again, a tremendous amount from the work that we do. I keep saying, we learn by doing this ourselves; we learn a tremendous amount from the work that we’ve done with Bing and the operational model that’s put in place by Bing. Bing, even though it’s a couple hundred thousand servers, is run by only a small number of operators. And the reason that can be done is the underlying operational system. The cloud system that it was built on top of is designed to enable this scale-out elasticity and to be able to work across many, many servers across multiple datacenters without end-user interaction, without operator intervention.

We’re taking that same knowledge that we learned in Bing and we’ve built it into the Windows Azure platform. And we’re taking that same knowledge that we’re building in the Windows Azure platform and we’re enabling it to be managed with System Center.

So in a world where today many of our customers run with a ratio of one operations person to, say, 30 servers, and a world-class IT organization might have several hundred servers per operator, inside our datacenters, we’re running with several thousand servers per operations person. And in doing so, we’re dramatically reducing the cost of overall operations. Now, what that means is that IT takes on a different and new role. It’s not like these jobs all go away. What happens, instead, is the jobs change to focus on enabling higher degrees of service availability, faster application deployment into production, and really doing more to accelerate and enable the underlying business.

Whenever we have one of these dramatic shifts, and the cloud is a world-class dramatic shift, the role of the people that were running the previous generation changes. It’s not like everything goes away. All of a sudden, the business demands more. There’s more to be done, and the key is that the underlying operational systems will enable that, and your role, your role, becomes all the more important going forward, because you’re taking part in a broader way in the overall business process and doing more to enable the business solution.

So it’s pretty exciting: a factor of 10 reduction in hardware cost, a factor of 10 speedup in application delivery, and a factor of 10 reduction in operations cost. All of that translates into faster time to market for solutions and more that can be done, IT’s ability to support the business more and enable more to be done for your end customers.

Let’s go through some of the details associated with enabling this. One of the key things is how the system all works together to enable applications to be deployed. Now, take our Bing environment. When we’re deploying thousands of servers, there are different roles within Bing: some of them are search crawlers, some of them are handling queries, some of them are the advertising system. There are all sorts of different apps within Bing. But each one of these apps runs on tens, hundreds, or thousands of servers. When you’re deploying applications across those numbers of servers, you can’t maintain a separate image for every one of these applications that you then separately patch and manage.

When you think about the complexity of that, typically, large enterprise customers have thousands of applications, that means potentially thousands of virtual images that need to be maintained. Just imagine in your head the complexity if you have thousands of images that are each maintained separately across tens, hundreds, or even thousands of servers. It’s sort of mind-boggling, you can’t do it, and that’s not the way we do it with Bing. That’s not the way we do it.

In fact, each one of the applications has a specific image that gets built, and that image, with the application composited on top of it, is deployed into production. In fact, what we’ve done is we’ve said, hey, we don’t need a copy of the operating system for every application. We can take a small number of OS images, not one, but not thousands, maybe two, five, 10, 50, a small number of OS images, and manage those and keep those up to date. And then when the applications change, they’re deployed into production together with the underlying OS image, and the application is composited on top of that.

So what you have is this layer cake of a physical infrastructure with hardware and hardware virtualization. You have application virtualization creating a logical layer for the applications, and then once again, all of this is controlled and orchestrated by a model, because a model is really at the center of this. And I first put this picture up in 2008 when we talked about how we would be evolving App-V for the server environment to enable you to maintain this small number of app images and deploy them into production and have that compositing happen.
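As a purely illustrative sketch of that layer cake, the idea is that a handful of maintained OS images serve many application packages, and each deployment is built by compositing a package onto its base image at deployment time. None of the names below correspond to a real System Center API; this is only to make the shape of the idea concrete.

    # Purely illustrative: a few maintained OS images, many application packages,
    # and deployments formed by pairing a package with its base image.
    $osImages = @{
        "WebBase.vhd" = "Windows Server 2008 R2 with the IIS role"
        "AppBase.vhd" = "Windows Server 2008 R2 with AppFabric"
        "SqlBase.vhd" = "Windows Server 2008 R2 with SQL Server 2008 R2"
    }

    $applications = @(
        @{ Name = "ContosoStore";  Package = "ContosoStore.appv";    BaseImage = "AppBase.vhd" },
        @{ Name = "ContosoPortal"; Package = "ContosoPortal.zip";    BaseImage = "WebBase.vhd" },
        @{ Name = "ContosoOrders"; Package = "ContosoOrders.dacpac"; BaseImage = "SqlBase.vhd" }
    )

    # Patch the three base images once; every deployment is the patched base
    # image plus its application package, composited at deployment time.
    foreach ($app in $applications) {
        "{0}: composite {1} on top of {2}" -f $app.Name, $app.Package, $app.BaseImage
    }

The payoff is in the arithmetic: thousands of applications no longer imply thousands of separately patched OS images, only a handful.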

Well, now, as we move forward with System Center, that’s becoming a reality and the next release of our System Center datacenter product will do exactly that. Learning from Azure, learning from Bing, applying it to a datacenter near you. So, with that, let me invite Edwin Yuen up to show you the next version of System Center datacenter in action. (Applause.)

EDWIN YUEN: Great, thanks, Bob.

BOB MUGLIA: Good morning.

EDWIN YUEN: Today, we’re going to give a sneak peek at some new capabilities coming in the System Center Server Management suite. Now, abstraction is one of the keys to building clouds, and it really allows you to separate the capabilities from the infrastructure of your systems. But the real business value is being able to deploy your applications and your services right onto that infrastructure.

So what we have here is the console for the next version of Virtual Machine Manager. And this version of Virtual Machine Manager will have all the capabilities that you would expect from the existing version, but also a lot of new things, as we’ll see here in the library.

Now, last year at MMS, we demonstrated the capability to use server application virtualization, or Server App-V, images to compose with virtual machines to build applications. Here, we’re going to be able to see that we not only manage Server App-V images, but also those SQL DAC packages that Bob talked about, and also MSDeploy packages, which will completely configure our IIS system.

BOB MUGLIA: So what we’re seeing is that the underlying subsystem, the platforms, are getting smarter. When we used to think about Windows Server, it ran on one machine. Over time, Windows Server based on Azure technology will run across multiple machines.

We call this a fabric layer that we’re building in, and we’re thinking about how fabrics interact at each layer of the stack. So, whether it’s the underlying deployment of virtual machines, the IIS layer for the Web server, the middle tier with middle-tier components, or SQL Server, all of those things are being built in as fabric layers. On top of all that, of course, System Center will manage all of those things consistently.

EDWIN YUEN: Well, in the existing version of Virtual Machine Manager, we’re actually managing and using templates to deploy virtual machines for applications, but most applications really span multiple virtual machines and are really defined at a service level. So, let’s see how Virtual Machine Manager and System Center can help us do that.

So what we’re going to do is we’re going to launch the new service designer feature of Virtual Machine Manager. And this is going to allow us to model, design, and deploy a template with a three-tier application that I’ll call Contoso Three Tier Template. And we’re going to choose a scale-out, three-tier app.

And as you can see, once I’ve done that, the entire skeleton for my three-tier app is there, including a Web tier, the application tier, the database tier, load balancers, networks, and storage.

In the bottom left-hand corner, we see a series of templates. And what these templates consist of is the OS configuration with the VHD image, the hardware configuration, and those application packages, whether an IIS MSDeploy package, a SQL DAC package, or even a Server App-V package.

In fact, we’re going to go ahead and deploy the Web tier. And if we take a look at the Web tier, we can see we’ve defined that hardware, we’ve defined the VHD that we’re going to use, and we have that MS deploy package, which pre-configures my entire Web server.

And to deploy it in this model, I simply drag, drop, and it’s there. I want to make it a scale-out tier, so I’m going to go and click scale-out down here. And I’m going to make sure that I get a maximum of 10 servers, and at least three running virtual machines, to make sure I scale out.

BOB MUGLIA: So this is an example of how System Center and Virtual Machine Manager are really enabling the cloud; the idea of enabling elastic scale-out systems on standardized hardware is exactly what we’re delivering here with Virtual Machine Manager. You see, Virtual Machine Manager will manage the number of instances of the Web tier that’s required. It will always have at least three, and it will scale out as required by the load. And that will be determined by whatever parameters are set by you; maybe it’s based on what’s coming out of the operations management system.

EDWIN YUEN: Right. And we’re going to go ahead and deploy the app tier here now. And what we see with the app tier is that this has that Server App-V package. And by using Server App-V, instead of pre-installing the application into a VHD, or even running a script to get it installed after deployment, I can dynamically deploy that server application right into this application tier, and by abstracting the application from the operating system, I can deploy a number of different applications using a single OS image.

So just like last time, I’ll just drag and drop and I have it. And now I’ll go ahead and create the database tier. And in this configuration, we need a little bit beefier hardware, so we’ll go to four virtual processors, eight gigabytes of RAM, and probably most important, we’re going to go ahead and select high-performance storage to make sure I have the storage IO that I need.
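As a rough illustration of the service template being assembled in the designer, the structure looks something like the sketch below: a named template with a web tier that scales between three and ten instances, an application tier carrying a Server App-V package, and a database tier with the beefier hardware profile. The property names are placeholders for illustration, not the actual Virtual Machine Manager object model.

    # Illustrative only: the rough shape of the Contoso three-tier service template.
    # Property names are placeholders, not the real VMM object model.
    $contosoThreeTier = @{
        Name  = "Contoso Three Tier Template"
        Tiers = @(
            @{ Name = "Web";  BaseImage = "WebBase.vhd"; Package = "ContosoStore.webdeploy.zip"
               MinInstances = 3; MaxInstances = 10 },
            @{ Name = "App";  BaseImage = "AppBase.vhd"; Package = "ContosoStore.appv" },
            @{ Name = "Data"; BaseImage = "SqlBase.vhd"; Package = "ContosoOrders.dacpac"
               CpuCount = 4; MemoryGB = 8; Storage = "HighPerformance" }
        )
    }

Pressing “deploy” against a definition like this is what lets the system create the virtual machines, wire up the load balancer, storage, and networking, and then manage the whole thing at the service level.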

BOB MUGLIA: And this is an example of how Virtual Machine Manager essentially becomes a manager of underlying fabric systems. In this case, what we have with SQL Server is the model of the database all defined in the DAC, and that is deployed by Virtual Machine Manager as a part of this application deployment.

EDWIN YUEN: So when I’m all done, in the upper-left-hand corner I can simply press the “deploy” button, and when I do that, it’s going to take the entire service that I’ve modeled and designed here and deploy all the virtual machines together, configuring all the storage and all the networking, and managing it at a service level.

So to save some deploy time, we’re going to go ahead and take a look at one that I’ve deployed earlier using the same template. And I’ll go to my services. And what I’ll see now is I’ll see that Contoso Web store service that we’ve talked about, that’s what we built off that Contoso three-tier template. And I can open that up and I see now five VMs that I have running, including that minimum three that we wanted at the Web tier, the application tier, and at the data tier.

Making these services easy to deploy is important, but in the life cycle of an application we really deploy maybe once or twice, while we’re constantly patching and updating our systems. So now I’ll show you how System Center can make patching a lot easier as well.

When I go to the library, we saw earlier that we had multiple different services, all consisting of multiple different virtual machines. But in the end, I created them all using only three OS images: a standard app image, a SQL image, and a Web image.

BOB MUGLIA: And so what happens is that at the application deployment time, System Center will composite the different application services on top of these standardized images for the needs of the application.

EDWIN YUEN: And with those images, what I can do is use a new feature and actually scan for compliance against the security baseline that I’ve created. Once I hit scan, Virtual Machine Manager will coordinate with your WSUS server, check against the security baseline, and, completely offline, scan all your different VHDs to see whether they need any patches, configuration changes, or updates.
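
Conceptually, the scan is a set comparison between the updates the baseline requires and the updates already present in each library image, done without ever booting the VHDs. A hedged sketch, with the WSUS lookup and image inspection reduced to placeholders and purely illustrative update identifiers:

```python
# Hypothetical sketch of an offline compliance scan: the baseline (which in the
# real feature comes from the WSUS server) is compared against the update
# inventory of each library VHD. The update IDs are illustrative only, and
# installed_updates_in_vhd() stands in for mounting and inspecting the image.
SECURITY_BASELINE = {"KB-0001", "KB-0002", "KB-0003"}

def installed_updates_in_vhd(vhd_path: str) -> set:
    # Placeholder: a real implementation would mount the VHD and read its update inventory.
    inventory = {"app-base.vhd": {"KB-0001", "KB-0002", "KB-0003"},
                 "sql-base.vhd": {"KB-0001", "KB-0002", "KB-0003"},
                 "web-base.vhd": {"KB-0001"}}
    return inventory.get(vhd_path, set())

for vhd in ["app-base.vhd", "sql-base.vhd", "web-base.vhd"]:
    missing = SECURITY_BASELINE - installed_updates_in_vhd(vhd)
    status = "compliant" if not missing else f"needs remediation: {sorted(missing)}"
    print(f"{vhd}: {status}")
```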

And now that we’re done, we can see that two of my images are fine and don’t need remediation, but one of my images does. We could actually version that VHD and test out the changes and the patches, but for the demo I’m going to go ahead and remediate the image. And, again, completely offline, we’re patching that OS image, that VHD, against the security baseline, making it a known-good configuration. And since Virtual Machine Manager is managing all the different services, templates, virtual machines, and service-level configurations using this template, it automatically knows everything that needs these updates based on this patched image.

BOB MUGLIA: So think about how this changes the way you operate and run your system. Now when OS images — or other application images need to be maintained and updated, what we can do is we can scan for those offline, understand exactly which images are affected, apply those patches offline, and then enable you to orchestrate the deployment of the updated images and the updated applications into production at the point that makes sense for you. There’s no need to patch the underlying running system.

EDWIN YUEN: Correct. And if we take a look now, we see that we have multiple different services affected by the template. And if we look at the one we built earlier, we have one tier, basically the app tier with the Server App-V package, that needs that update.

So we can go down here to the bottom right-hand corner, where there’s a button that just says “update service.” Now, we could go ahead and test this in a test environment, but for the demo we’ll go ahead and update the service. What’s happening here is that Virtual Machine Manager goes to that virtual machine, abstracts or pulls off that Server App-V package, including the entire application state, removes the old OS image, places the new, patched, known-good image right in there, personalizes it for the application, and then reapplies that Server App-V package on top of it.

The result is that by updating a single OS image, we can actually update and patch hundreds, if not thousands, of applications with a single click of a button.

BOB MUGLIA: The implication of this is that there’s a great deal more consistency to your production systems. It’s much more of a known environment, and it frankly scales a lot better. The idea of applying patches to 200,000 Bing servers is just not in the cards; that’s not how we do it. We don’t patch live, running systems in Bing. Instead, what we do is apply the patches to new images, composite the appropriate Bing application image on top of that, and then orchestrate the deployment of that into production. And in the case of Bing, we have to stay always available, so we stage that through multiple segments of our datacenter to ensure consistency and availability of the application as we’re making the underlying changes. That, by the way, is exactly the same thing we’re doing with Windows Azure, and we’re taking that same technology and enabling you to apply it within your own datacenter environment with System Center.
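
Putting those pieces together, here is a hedged sketch of the update flow just described: lift the Server App-V package and its state off each machine, swap in the patched known-good image, reapply the package, and walk through the datacenter one segment at a time so the service stays available. Every helper below is a placeholder, not a real System Center or Windows Azure API.

```python
# Hypothetical sketch of image-swap servicing with a staged rollout. Every
# helper is a placeholder that just prints what the real step would do.
def detach_appv_package(vm):
    print(f"{vm}: detaching Server App-V package and its state")
    return "package", "state"

def replace_os_image(vm, vhd):
    print(f"{vm}: swapping in patched image {vhd}")

def reattach_appv_package(vm, package, state):
    print(f"{vm}: reapplying {package} with saved {state}")

def verify_health(segment):
    print(f"{segment['name']}: health check passed")

def update_vm(vm, patched_vhd):
    package, state = detach_appv_package(vm)    # lift the application (and state) off the VM
    replace_os_image(vm, patched_vhd)            # replace the OS with the known-good, patched image
    reattach_appv_package(vm, package, state)    # put the application back on top

def staged_rollout(segments, patched_vhd):
    for segment in segments:                     # one segment at a time keeps the service available
        for vm in segment["vms"]:
            update_vm(vm, patched_vhd)
        verify_health(segment)                   # only continue once this segment is healthy again

staged_rollout(
    [{"name": "segment-1", "vms": ["app-01", "app-02"]},
     {"name": "segment-2", "vms": ["app-03"]}],
    "ws2008r2-app-patched.vhd",
)
```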

EDWIN YUEN: Great, thank you.

BOB MUGLIA: Great, thanks a lot, Edwin. (Applause.)

So this is a key part of the path to taking your existing environment forward into the cloud. We very much want to enable you to have a transition where you can leverage what you’re doing today, leverage the knowledge that you have, and then begin to take advantage of these great new capabilities that cloud-style application deployment will provide. And, again, whether that’s within your own datacenter, in a hosted datacenter, or in a public datacenter, we think about that in a very cohesive way.

Now, when we think about the gap from, say, the private cloud and your own private datacenter to a public cloud, there’s a lot to think about here. Sometimes, when people talk about private and public clouds and what it means to connect them, they talk about moving VMs between a private cloud and a public cloud, and you certainly need to do that. It’s an important part of what is required: being able to take existing OS images and existing application images and move them up into a shared public cloud environment. But that’s just a tiny part of the overall problem. When you think about the potential of utilizing shared resources that exist either in hosted or Microsoft-run datacenters, there’s a lot of amazing potential: for applications that only run once in a while and don’t need dedicated hardware, for incremental capacity requirements, or for using spare capacity that might exist inside those systems at a lower price. Those are all great advantages. Also, services will be delivered to you by vendors running in the cloud, and you’ll need to integrate those into your overall operational environment.

So when you think about it really cohesively like that, moving a VM is only a tiny part of what it takes to move from a private cloud to a public cloud. You can’t get there with infrastructure alone; it’s insufficient. You have to think about the entire IT environment, the entire service management environment, very holistically, and that very much includes thinking about the application.

So think about some of the other problems: identity, and managing the identity of your users across the different clouds that you’re working with, your own private cloud as well as some of the public cloud providers. How do you federate that identity in a cohesive way? You can’t solve that at the infrastructure VM layer; that doesn’t work. How do you think about your data? OK, you want to move data up into the public cloud; is that data secure? What does it take to provide that? The data exists in files, it exists in databases. How is that data going to be encrypted, with the keys known only by you, to ensure the security and compliance of your system? Again, that’s a higher-level set of services that have to be provided above the virtualization layer.
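
One common pattern for the data question is to encrypt on-premises before anything leaves your datacenter, so the cloud only ever stores ciphertext and the keys stay with you. A minimal sketch of that pattern using the Python cryptography package; it illustrates the idea rather than prescribing a design:

```python
# Minimal sketch of client-side encryption with a customer-held key: data is
# encrypted before it is uploaded, so only ciphertext ever reaches the cloud.
# Requires the 'cryptography' package; the payload here is a placeholder.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # generated and kept on-premises, e.g. in your key store
cipher = Fernet(key)

plaintext = b"customer order history export"
ciphertext = cipher.encrypt(plaintext)  # this is what gets uploaded to cloud storage

# Later, back on-premises (or anywhere the key is deliberately shared):
assert cipher.decrypt(ciphertext) == plaintext
```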

I talked earlier about the app model, about applications and application subsystems evolving so that scale-out services become a native part of them, with models driving all of that. Again, that’s a higher-level set of services.

And then finally, there’s how services are managed across all of this: how do you deliver your SLAs and orchestrate and manage services across your organization, some of which will now exist within your own datacenter, and some of which will connect into these public cloud environments? How does that get managed, and what tools do you use to manage it? Are those new tools that you’re not familiar with, or are they the same tools you’ve been working with for a long time to manage your datacenter?

So, this gap between the private and the public cloud is very real, and it’s not something that can be solved at the virtualization layer alone. It’s something that requires thinking about the full stack, it requires a lot of coordination across the industry, and of course it requires a lot of input from you.

We’re working hard to bridge that gap, and what I’d like to do is show you a very key step in that, and invite Shilpa Ranganathan to show us how System Center can monitor between your own private datacenter, your own private cloud, and a public cloud environment, in this case Windows Azure. Shilpa? (Applause.)

SHILPA RANGANATHAN: Thanks, Bob.

BOB MUGLIA: Good morning.

SHILPA RANGANATHAN: Good morning.

Good morning. Today, you are going to see how we’ve taken our first steps toward monitoring and managing your applications as they transition to Windows Azure.

You will be able to extend your existing investments in System Center to seamlessly monitor your applications, be it in the cloud or on-premises, while still taking advantage of all the great benefits that Bob talked about with Windows Azure.

As an application owner, what is really important for me is the ability to see my applications in a familiar view, and that familiar view is provided today by System Center Operations Manager.

I’m in the Operations Manager console today. I’m going to bring up a view of my application, which is the Contoso electronic Web store application.

With the help of a Management Pack, Operations Manager has discovered all the components of this application.

Let’s take a look at what these components are. To the left of the screen you can see a couple of databases, and both of these are on-premises. Right in the middle of the screen I have two points of presence, both running within my datacenter, one in Chicago, one in Las Vegas, and both of these simulate end users accessing my application. And to the far right, Bob, is the coolest part of this application, which is my Web front-end, entirely hosted on Windows Azure.

BOB MUGLIA: So, System Center here is providing a service-level view of the application, and in this case that service-level view, that model that’s here is spanning from an on-premises cloud, a private cloud on-premises, all the way up to a public cloud, in this case Windows Azure.
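
A service-level view like this is essentially a dependency model whose component health rolls up into a single service health state, regardless of where each component runs. A small, hypothetical sketch of that rollup; the component names and states are illustrative only:

```python
# Hypothetical sketch of a service health rollup across private and public
# cloud components; the component list and states are illustrative only.
SEVERITY = {"healthy": 0, "warning": 1, "critical": 2}

components = {
    "orders-db (on-premises)":       "healthy",
    "catalog-db (on-premises)":      "healthy",
    "web front-end (Windows Azure)": "warning",
}

# The worst component state determines the service state.
service_state = max(components.values(), key=lambda s: SEVERITY[s])
print(f"Contoso Web store: {service_state}")   # -> "warning"
```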

SHILPA RANGANATHAN: That’s correct.

Even though a critical component of my application is hosted in Windows Azure, my job as an application owner doesn’t change. I’m still responsible and accountable to meet the SLAs on this application.

And, Bob, like we said before, the business is doing really well, and we have a lot of customers requesting this application. So there’s a lot of load being placed on it.

And you can clearly see from the screen below that there are a bunch of warning signs; Operations Manager is indicating to me that I might be impacting my SLAs.

Operations Manager is helping me in two different ways here. First, it’s watching my application with synthetic transactions that I’ve constructed on-premises, and these synthetic transactions run from the points of presence that we talked about.

And second, with the help of the Management Pack, we’re leveraging Windows Azure instrumentation to give you deeper insight into the Web front-end that’s hosted in Windows Azure.
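
The synthetic-transaction side of this is conceptually simple: from each point of presence, periodically exercise the application the way a user would and compare the result against the SLA. A rough standard-library sketch of that idea; the URL and threshold are placeholders, and this is not how Operations Manager implements synthetic transactions internally:

```python
# Rough sketch of a synthetic transaction: probe the application from a point
# of presence, time the response, and flag it if the SLA threshold is missed.
import time
import urllib.request

SLA_SECONDS = 2.0

def probe(url: str, pop: str) -> None:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    elapsed = time.monotonic() - start
    status = "healthy" if ok and elapsed <= SLA_SECONDS else "warning"
    print(f"[{pop}] {url}: {status} ({elapsed:.2f}s)")

for pop in ("Chicago", "Las Vegas"):
    probe("http://contoso-webstore.example.com/", pop)
```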

So, let’s take a deeper look here. We’re going to open the performance view in Operations Manager, and, Bob, here again this is a familiar view that I’m used to and this is no different from what I would have done in Operations Manager had the app been on-premises.

BOB MUGLIA: But here the difference is that Operations Manager is connected to the underlying monitoring services that are native within Windows Azure, and this performance information is actually coming out of a Management Pack for System Center Operations Manager that connects to Windows Azure, which will be available shortly.

Our expectation is that we will ship that Management Pack sometime later this year.

SHILPA RANGANATHAN: That’s correct.

So, clearly there are two counters here that have spiked, requests per second and processor utilization.

So, the next question would be, what would I do to remediate this?

So, let me take you back to this screen here. If this application were completely on-premises, what I would do is add additional Web servers in my datacenter. But given that my Web front-end is hosted in Windows Azure, I can add additional Web roles in Azure as well.

But what I really want to do is make this really simple and easy, so that application owners can do it from within Operations Manager.

So, I’m going to click on a task here. Bob, this task, again, is provided by the Management Pack you talked about. I’ll add a couple more instances of the Web role, run the task, and now we have two new Web role instances that have been added here.

Once these get provisioned and come online in Windows Azure, all four instances will share the load on the application, and my application will be healthy again.

BOB MUGLIA: One of the core attributes of Windows Azure is the ability to scale applications in an elastic way, and bring up new virtual machine instances, and it has a set of APIs in order to do that.

And what we’ve done with System Center and this Management Pack is with Operations Manager we’re connecting into the native capabilities within Windows Azure to both, as we showed earlier, monitor and understand the performance characteristics of the application, and in this case remediate the issues by adding incremental roles, incremental virtual machines into production.
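
Under the covers, scaling a Windows Azure role amounts to changing the Instances count in the deployment’s service configuration and submitting it back through the Service Management API. A rough sketch of just the configuration change, with a simplified configuration document (namespace omitted) and the certificate-authenticated management call left as a placeholder:

```python
# Rough sketch: bump the instance count of a Web role by editing the
# <Instances count="..."> element in the service configuration (.cscfg) and
# handing the result to the Service Management API. submit_configuration() is
# a placeholder for that certificate-authenticated call; the XML is simplified.
import xml.etree.ElementTree as ET

CSCFG = """<ServiceConfiguration serviceName="ContosoWebStore">
  <Role name="WebRole">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>"""

def scale_role(cscfg_xml: str, role_name: str, new_count: int) -> str:
    root = ET.fromstring(cscfg_xml)
    for role in root.findall("Role"):
        if role.get("name") == role_name:
            role.find("Instances").set("count", str(new_count))
    return ET.tostring(root, encoding="unicode")

def submit_configuration(updated_cscfg: str) -> None:
    # Placeholder for posting the updated configuration to the management service.
    print(updated_cscfg)

submit_configuration(scale_role(CSCFG, "WebRole", new_count=4))
```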

SHILPA RANGANATHAN: That’s correct.

BOB MUGLIA: Great. Thanks a lot, Shilpa.

SHILPA RANGANATHAN: Thank you, Bob. (Applause.)

BOB MUGLIA: So, thinking about that gap, the gap between private and public clouds, our focus is to enable you to manage it, and to bridge from today’s datacenter and private cloud environment to the future and the things that public clouds will provide.

We don’t expect that you’ll move entirely to the public cloud. We know that you’ll take your existing investments and begin to think about how you would deploy private clouds, taking advantage of a number of the capabilities that we’ve shown today. But there are a number of cases where, for some applications, the use of a shared model is very interesting.

I talked about the idea of shared and dedicated clouds, both within your own datacenter, within hosters, and within Microsoft, and we will provide both. We’ll provide both a shared cloud environment, and in some cases a dedicated environment.

And I mentioned that government would be an example of a dedicated environment that we’d provide, but we actually have talked to governments in some cases, and they’re interested in the shared cloud environment that Windows Azure provides. So, it’s really a combination of all of these things.

And you want to be able to take and leverage the knowledge, the training, the systems, the applications, all of the things you have, and carry them into the future. That’s true at the infrastructure layer, but it’s also true at all of the layers on top of that: the underlying platform, as well as, very importantly, the applications.

And our goal is to enable you to manage the gap from here to there, and we will do that step by step, just like we did with Dynamic IT, step by step. We’ll do that by working with our partners and by learning from you.

And when we think about this, we think about it very holistically. We think about the needs of IT from an overall perspective.

You know, we start with the business: understanding what you’re doing within your organization to deliver the services that your customers demand, whether those customers are business units within your organization or end customers, and how you can deliver what you need as effectively as you possibly can. And the technology that is coming will do an amazing set of things to drive down cost and reduce time to market.

But to put all this together, you have to think holistically. You absolutely have to think at the infrastructure level and the virtualization layer, but you also have to go above that and think about identity, about the underlying platform and what can be provided with things like AppFabric for the middle-tier services or the Web server, about what can be provided in the database, and about how the database evolves.

Think about today. Let me give you an example. Running SQL Server today is one of the more expensive parts of every IT organization. Well, with the Windows Azure platform and SQL Azure, we’re running SQL Azure as a single fabric-based system across thousands of servers, spanning six datacenters worldwide, and provisioning a new database is as simple as clicking on a Web page.
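
Provisioning programmatically is about as simple: connect to the logical server’s master database and issue a CREATE DATABASE. A sketch using pyodbc; the server name, credentials, and driver string are placeholders.

```python
# Sketch of programmatic SQL Azure provisioning: connect to the logical
# server's master database and create a new database. Server, credentials, and
# driver are placeholders; autocommit is needed because CREATE DATABASE cannot
# run inside a transaction.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:yourserver.database.windows.net;"
    "DATABASE=master;UID=admin_user;PWD=placeholder-password",
    autocommit=True,
)
conn.cursor().execute("CREATE DATABASE ContosoWebStore")
conn.close()
```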

Think about how that changes things. All of these layers need to be considered as we think about how we help you move forward into the cloud of the future.

And our goal, simply put, is to take all of the learning that we’re building within our own public cloud environment, and provide that to both our hosting partners and to you within your own datacenter, your own private cloud environment, so that you can evolve the datacenter of today into the cloud of tomorrow.

So, I’ve talked about a journey here. This talk today, in fact in some ways this conference is all about a journey, a journey that we have been on together. I think that’s a very important part.

If you’re new to this journey, if you’re just coming onboard now, welcome. If you have been with us for some time, say through Dynamic IT or even further back, into the dark early days of SMS, thank you. Thank you for all you’ve done to help us through this journey; you’ve been a big part of it. And if you’re one of our partners, you’re a key part of how solutions are delivered to all of our collective customers.

So, we have indeed been on a journey together, and that journey has taken us to a point where we can see the fruition of the vision of Dynamic IT. That was once a vision; it is becoming reality today. And with that reality, we can look forward to an even brighter future that the cloud will deliver, because the cloud really is a major transformation in the way IT will be delivered as a service.

We could not be more excited about that. It seems like it was just yesterday that I first started talking about DSI and Dynamic IT. I’ve spoken at a number of these MMSs now, this is the fourth or fifth, and time flies quickly. What I can tell you for sure is that it doesn’t slow down as we move forward. There’s a lot of great technology being developed, a lot of great things that will happen. And remember, everything moves faster in the cloud.

Thank you very much. Have a great MMS. (Applause.)

END