Remarks by Bob Kelly, corporate vice president for Infrastructure Service Marketing, on the Impact of Cloud Computing on Business Information Technology
Microsoft Management Summit
Las Vegas, Nev.
April 28, 2009
BOB KELLY: (Applause, music.) Wow, it is great to be here. The city of lights with the best damn customers on the planet. Thank you so much for coming today, coming for this week. This is a huge event for Microsoft. And, in fact, it’s a huge event for the industry. We really have done a lot of great things together over the last 10 years as we’ve built MMS into being a powerhouse in the industry.
Myself, I’ve been at the company for 13 years. I started as a server guy and I’m still a server guy. I started as the NT product manager and as the Novell competitive product manager. And guess who was on the opposite side of me? Brad Anderson. Where are you, buddy? There you go. Thank God we’ve got him at Microsoft now and he’s helping us build a great business together on the management side of the house.
So while working at Microsoft, I’ve actually done a ton of different roles. Roles in the BG or the business group, roles out in the field helping customers and the sales force understand value propositions and how to land our product. And about seven or eight years ago, I made a fundamental decision that I needed to come back to the business group where we drive corporate strategy so that we could start to invest more across the stack, deep into the stack and across the fabric to ensure a well-managed, reliable, securable, available platform for enterprise businesses. And that’s what I do now.
I now run product management, marketing, and strategy for the Windows Server business, for our System Center business, and for Forefront. I build all the solution accelerators that you all use as well. So really we’re thinking end to end across everything that we do to ensure that you have all the tools that you need to run your businesses.
And it’s places like MMS that really get me excited about what I do. The reason is I get to come here and hear from customers who are extremely passionate about the technology, who are extremely transparent about their feedback to us about what’s good, what’s not good, what we can do to better enable you to do your job. I love that. And it’s the reason I get up every day and come to work at Microsoft. So thank you very much for what you do for me.
So now it wouldn’t be MMS if I didn’t actually have some amazing statistics to talk to you about. There are a lot of thank-yous that I have to give, but before I do that, I’ve got to tell you a little story.
About, I don’t know, six, eight weeks ago, I was about to do a little speech in front of the Microsoft MVPs. And I was prepping for the speech and it’s a really important speech. The MVPs are a great audience for me and I really wanted to get it right and I was stressing about this speech. And my 15-year-old daughter says, “So, Dad, what’s up? How come you’re worried, what’s going on?” Well, it’s a really important speech, I don’t want to screw it up. She said, “Well, Dad, I’ve got some advice for you: Don’t mess it up.” Great, well, that’s my goal today too. I’m not going to mess it up. OK?
So in that speech, though, I noticed that there were a tremendous number of System Center MVPs at the event. Somewhere around 40 or 45 of them. And so I put a photo up of the System Center MVPs and the crowd went nuts. That’s one of those things that I love about the feeling of an event like this. You all know each other. There’s tremendous networking that occurs here. And it’s because of that and because of these networks and community that you build for yourself that you’ve got tremendous credibility with Microsoft. And you’ve built that credibility with our System Center team.
I have had an engineer on the System Center team tell me before we even write a line of code, we check with the MVPs. That’s a profound impact that you’re having on Microsoft. So thank you very much, it’s a very special relationship that we have together.
Now, as Rodney was pointing out earlier, this is a very important year for MMS. It’s the 10th year of MMS, and the 10th year since SMS 2.0. Yeah, yeah, I hear the rumbles. We all have those scars, myself included. And yet, it was a very important time for Microsoft because that was the time when we really began to take systems management seriously.
We had made some mistakes, and we learned from those mistakes through deep dialogue with you, by really focusing on the customer problem and internalizing what that would mean.
Now, part of the way that we got there was by really investing in an event like this. So ten years ago, we had 80 people at the SMS user conference meeting. Today, we’ve got about 3,000 people sitting here in this room. So at that time, we also had probably one or two sort of real insiders, the real passionate people. Today, over 80 percent of the attendees are alums. That’s stunning.
There are a few people, however, it’s imperative that I call out because they’ve been a presence and a force at MMS since the very beginning. Rod Trent. Rod, are you out there? (Applause.) Rod started the myITforum site, a very, very popular community, and he’s been here every year.
Paul Thompson, MSIT, are you out there, Paul? (Applause.) Paul’s from MSIT and he not only attends, he delivers sessions every year. And by the way, he’s probably the only person in the history of the planet who has a poster of SMS over his bed. That’s a little weird. Now, if there’s another one of you out there, I want to meet you. OK?
April Cook. April’s been here every year as well. Unfortunately, due to the circumstances of the economy, she couldn’t travel to make it this year.
And Brady Richardson, who has been a very powerful force in the System Center business and at MMS. Many of you know that Brady’s quite ill and he couldn’t make it this year. So I’m sure he’d love to hear from you. Send a note to him on Facebook, I’m sure he’ll need it.
So with that sort of as the backdrop, there are a couple things I wanted to do for you. One, we’re going to introduce this notion of the cloud. OK? And you’re going to see this thing in the background there, and you’re going to say, “Wow, the cloud. What’s that all about?” There’s a whole bunch of hype out there. Is this another cloud rush conversation? No.
What I’m going to want to do for you is help you understand what we mean by that cloud. Over the course of the last ten years, there have been a number of very important trends that really are starting to come together in a way that we haven’t seen to this point in time.
And so what I want to do is actually help you understand what the cloud means to Microsoft, and what we think the cloud means to you as customers. And I’ll be the first to admit that Microsoft and many in the tech industry haven’t done a really good job of showing you why this cool technology has real business impact. That’s my goal today: to help frame that for you.
We’ve been fortunate. Microsoft has developed a vision that we call Dynamic IT. And that Dynamic IT vision has been sort of infusing itself into everything that we’ve done across our infrastructure stack and our application stack. In fact, since we have such a broad number of alums here, you know that Kirill and Bob (Muglia) and Brad, et cetera, have been talking about DSI and Dynamic IT for a number of years. These are long-term investments that really have landed themselves in what we’ve talked about in terms of virtualization, process-led and model-driven management, service orientation, and user centricity.
And it’s those technical investments that will really enable us to drive forward and drive the usage of cloud platforms and cloud computing deeply into the IT industry. And not too long after we announced Dynamic IT and really started to talk about it, we had a number of our friends on the engineering side spin up this little piece of work that we now call Windows Azure.
Windows Azure is that foundation for that cloud – that public cloud, the stuff that customers will consume from the cloud itself. And so that’s a very important part of how we think about what we’re going to do in terms of forming the cloud and forming the IT strategy going forward.
So what do we mean when we say “cloud”? A cloud is a platform delivered as a service. It’s optimized for cost and performance, massive scale up or out, and reuse. That’s what we mean when we say cloud. There are lots of different ways people talk about cloud, but I think you really just need to lock in your head those terms. Clouds are reliable, highly available systems that are built for redundancy.
You know, a classic analogy, right, the airplane. It’s got four engines, but it can fly on three. That’s an important kind of redundancy. And when you think about clouds, that level of reliability is a critical element of how we have to think about building IT as a service. We have to have that level of reliability.
So today, for example, we have hundreds of millions of users in our Hotmail infrastructure. If a server goes toes up, we don’t send an IT admin in immediately, we just move that work to another server. And then once a month, an IT admin will go in and look for the red blinking lights and replace that hardware. Massively reliable, absolutely critical, the first element.
The second is predictable. Things just have to happen when they should. Today in IT, things happen in windows of time. If you have a problem with a server or a client and you’ve got to go patch that system, you get a window of time where you’ve got to shut the machine down or move that work through virtualization and then go patch the system.
Well, to have predictability, you need a system that is actually self-healing and starts to take on the attributes of an end-to-end service whose behavior can be predicted.

Finally, automated, for cost reasons, obviously, and for reliability. The automation of the cloud means that we can start to take huge leaps forward in how many servers each person can manage and in what work those people are doing. It’s absolutely critical that we move forward on automation so that we can enable the system to start down this path of what we’ve been talking about for years, which is self-healing IT.
We remove the risk of errors through automation. It’s hugely important that we do all that work up front to enable a knowledge-based, model-oriented, policy-driven piece of infrastructure so that we can have this level of automation. And with the cloud, then, apps can be deployed, changes can be made, and remediation gets done in minutes, not days or hours. That’s really, really critical.
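To make that concrete, here is a minimal sketch of that kind of automated remediation, written in PowerShell 2.0, the scripting environment discussed later in these remarks. The server list, log path, and the choice of W3SVC as the critical service are hypothetical; the point is that the system restarts failed work and records it instead of paging an administrator.

    # Self-healing sketch: check a critical service on each server and
    # restart it if it has stopped, logging the action for later review.
    $servers = Get-Content 'C:\ops\servers.txt'      # hypothetical host list
    foreach ($server in $servers) {
        $status = Invoke-Command -ComputerName $server -ScriptBlock {
            $svc = Get-Service -Name 'W3SVC'         # hypothetical critical service
            if ($svc.Status -ne 'Running') {
                Start-Service -Name 'W3SVC'
                'restarted'
            } else {
                'healthy'
            }
        }
        "$(Get-Date -Format s)  $server  $status" | Add-Content 'C:\ops\selfheal.log'
    }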
There’s a clear potential, then, for the cloud to deliver dramatic improvements in IT efficiency. But it also has a long-term implication for your IT strategy. So when you think about the cloud, there are a couple things you have to think about. What really changes for your business? One, because you’re shifting from capital expenses to operating expenses, you have better control of costs. That’s a very important element of why customers are starting to adopt critical technologies like virtualization today. And as we move more and more into cloud, you shift your cost model from cap ex to op ex, and that, therefore, allows you to expand and contract your business as necessary.
The ability to scale up or out without building for maximum capacity, what we call elasticity, is the second critical element of what changes for your business.
And then the third is, of course, that IT now really is, in fact, aligned to the needs of the business. From a security standpoint, from a compliance standpoint, from a service-level delivery standpoint, a cloud infrastructure can now set up an environment that allows you to immediately respond to the business, but ensure cost containment and ensure that you’re complying with regulation or your own internal policies, et cetera. A very, very important transition.
At Microsoft, we began thinking about this cloud years and years ago. And as we started to think about it, one of the things we always thought about was, well, what is this? What is it in the context, if you will, of all of IT? I mean, those of us who have been around IT for a long time know that there have been evolutions before. And so we did sort of this little view of the world. What’s been this evolution?
Those who have been here know that, you know, 10, 15 years ago, the mainframe was really that critical piece of infrastructure. It was absolutely critical to deliver against the business. It provided data processing, handled very, very high volumes of IO, and was widespread in core high-end businesses. But it also introduced some very important technologies that we’re starting to take advantage of today: virtualization, time sharing, fault-tolerant computing, really having absolute reliability.
And then what happened? Well, this huge transition occurred called client-server. With the commoditization that occurred with x86 hardware and with the power of software that could become pervasive at a low cost, you now saw IT adopt technology in a much more dramatic fashion. Right? And what that did is it expanded the footprint of the use of IT not just in large enterprises, but across all sizes of business.
The third evolution: the Web. The Web really was the first instance, if you will, of distributed applications. Now we had standards-based distributed applications, and it meant the adoption of that technology went through the roof. And now we have pervasive applications that are Web- or service-oriented as a part of our IT infrastructures.
And today, we’re at the early stages of the cloud. We are seeing the emergence of some massive, massive global data centers to provide service to end customers anywhere in the world. Microsoft has three global data centers today. We have one in Washington state, we have one in Dublin, and we have one in Singapore. And these have hundreds of thousands of servers in them to deliver service capabilities to end points around the world.
And in that context, in this shift to cloud-based computing, we see fundamentally two models coming out of this world: We see the first emergence as a public cloud. This is where we think there will be very few companies who will actually have the ability to deliver the data centers and the technology and the capacity to run a cloud for external companies.
Now, there still will be a very broad ecosystem of hosters who will be able to provide this capability to customers either for geo or domain-specific reasons. And those will be fantastic partners for us as we begin to federate out this public cloud. But in the end, there will only be two or three or four very large public clouds that will be delivering the end-to-end platform capabilities that customers can build IT on. And, certainly, Microsoft will be one of those with our Windows Azure platform.
Second is that private cloud. We believe fundamentally that business enterprise customers will demand the same levels of reliability, predictability, and automation as that public cloud provides for their internal data centers. So we call that the private cloud.
Now, it’s very important to note that you can start to conceptualize what this means when you start thinking about things like virtualization and the type of management we’re doing with System Center, which is broad and deep across the stack, and, as we think about dynamic infrastructure, about getting more automation into the system.
We have to shield the complexities of the infrastructure from your applications. And then there’s this notion of elasticity, the ability to scale up and scale down at the point of demand. The spikiness of demand has led IT to build for maximum capacity on premise; we think that with these elements of the cloud, we can help you deliver capacity on demand instead.
So we’ve been investing across this portfolio, everything from Windows to Forefront to System Center, across this optimized infrastructure, the tools you use every day. And we’ve been investing across the application platform with SQL, with BizTalk, with .NET, with SharePoint – all wrapped together within a development environment that will have consistent and coherent APIs from the private cloud to the public cloud.
Windows Azure is that implementation of the Microsoft stack in the public cloud. So we’ve taken all those investments that we’ve put into the on-premises world, we’ve now taken those to the public cloud, and the learnings that we have in the public cloud we’ll actually bring back to the private cloud. That is a unique position that we hold.
There are very, very few vendors in the industry who have the capacity, the tool set, the perseverance to now actually give you a platform that is deep and broad from private cloud to public cloud. That is a unique position that Microsoft holds and it’s a unique position in large part because of the kinds of partnerships we’ve had with customers like yourselves who keep telling us every day we have to do better.
So when you think about this, you really need to think about the private cloud as your internal data center optimized for performance and cost, with compatibility with your existing applications, all built on the tools you know today: Windows Server and System Center. It is very, very important for us to go forward like that.
So then what happens in this world when we start thinking about the implications of cloud investments in the private and in the public cloud? We’re going to extend all of our investments, and the learnings that we have on the public side, to what we’re doing on the private side. That ensures you have a platform that spans those two worlds and allows you to take advantage of things like Active Directory, the security infrastructure, and our management infrastructure across both the public and private cloud. Built, again, on Windows Server and on System Center.
So in a two-cloud world, what happens to your applications? We believe that customers will fundamentally not standardize on one cloud. We think it’s not going to happen. We think it will be very critical to be able to deploy applications on premise in the private cloud, to consume applications from that public cloud, and to be able to span or federate applications across those two clouds. Applications with infrequent scale, for example, are great candidates for that public cloud because they’re sort of on demand, if you will, when I need that elasticity or that spike up. So rather than invest in all that hardware and bandwidth, you use the system that’s designed for spikes and elasticity: that public cloud.
So think of an example of an insurance company that runs actuarial numbers, say, four times a year. Today, they would have to build all that infrastructure for those four executions. Instead, tomorrow they can run IT the way that they do today, and when they need that capacity, they consume that capacity right within their existing tool sets from the public cloud.
And existing applications should have the flexibility and the ability to take advantage of that elasticity and the dynamic nature of the public cloud with no code changes. That is absolutely critical.
And we will have the tools for you to develop this new breed of applications that spans that public and private cloud. You can still use the apps you have today, but we’ll offer new ways for you to integrate them with hosting partners and with Azure. Really, really important. And this ability for customers to consume IT on premise, off premise in the public cloud, or in combination is critical. And the only way to get there is to go with a provider of software and a platform that offers broad and deep capabilities with a single management environment that allows you to consume all those things as though they were one integrated world.
That is what we think about when we talk about Dynamic IT. All of these investments have been predicated on this notion of building a coherent system across the data center, applications, and a user-centric world that is built on these common technology sets, unified and virtualized. We’ve been working on that for a long time. I’d say, check. We now have the assets to really help you virtualize end to end across your infrastructure.
Process-led and model-driven: what is this really? This is really about thinking through the entire application life cycle and the management of that application life cycle, capturing that in models, and then driving through policy and enforcing through policy.
Third area of investment: Service enablement. On-premise, off-premise, everything that we have done, all of the technologies that we’ve been building on have been about the shift from thinking about an application or a piece of infrastructure as a single box to thinking about the end-to-end experience for the end user, delivering that service all the way to the end.
And finally, on user centricity, Brad will spend quite a bit of time tomorrow really talking about our investments in user centricity and how we think about System Center as a way to ensure a well-managed experience end to end for that user. So it’s really, really important. This investment that we’ve made in Dynamic IT allows you to apply this stuff to your data center today, with the knowledge and the road maps that we’re taking these investments to the cloud as well.
So what I’d like to do now is actually pause in talking and have you see a customer video of someone who’s already using these concepts of private cloud and dynamic IT to run their business today. So let’s roll the video, please.
(Video segment.)
BOB KELLY: So what’s great about that video is that you see that IT really is acutely aware of what the business needs are. In fact, how many of you are just like me? You know, you get that feeling every once in a while when you’re on the IT side of “I hope this thing goes right” with your salespeople or some exec who needs something.
Thinking that way from the beginning, thinking about what that user really needs and how I make sure that IT can deliver at the point of demand, that’s really the critical thing that you see CDW has built around.
And the sales team understands, therefore, that they can be confident in IT. They can think of IT as a strategic asset for their business rather than something that they just have to deal with. And so using System Center and Windows Server, CDW’s been able to run this truly dynamic IT infrastructure, one that is available, responsive, automated, elastic – it really enables them to deliver against their business goals in a way that they were not able to before.
OK, so with that as context, let’s drill in a little bit further on this private cloud. What are we really talking about when we talk about the private cloud? Since most organizations today have the majority of their IT on premise, I want to focus on this rather than on the public cloud. You hear more and more from Microsoft about the public cloud at PDC and events like that.
There’s tons and tons of hype in this space. And you’ll see stories from other vendors about how this is a massive revolution of IT. Well, you know, really I don’t want to throw cold water on anybody, but while this is an important trend, it is not a massive revolution of IT. If you’ve been thinking about these things like Microsoft has, around Dynamic IT and around how we can create these abstraction layers, then the reality is you start to see that this has been coming for a while. And, in fact, if you’re virtualizing your environment today, you’re closer to some of these fabric qualities, these private cloud qualities, than you might think.
So what’s the first step to a private cloud? It begins with an infrastructure fabric. The compute resources, storage, everything through to the network, whatever composes your infrastructure, the hardware and the operating system: that is your fabric. And with a cloud, you start to do three things. One, you abstract the hardware. That’s what I meant earlier when I said if you’re on the road to virtualization, you’re on the road to a private cloud. The notion of abstracting the hardware is the first step of being able to think about logical resources as opposed to physical resources.
The second element is logical pooling of compute, logical pooling of those assets. We’ve delivered this today with Virtual Machine Manager, where you can connect the compute power from your servers into a single logical resource.
The third investment is in the automated provisioning of those resources. And we deliver that today with tools like Virtual Machine Manager and its intelligent placement feature, where the system looks at the attributes of that pool and says this is the best place to put this piece of work. Very important pieces of work that we’ve done to invest in the foundational elements of a fabric for a private cloud. But it doesn’t end there, clearly.
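As an illustration of what that pooling and placement look like from the command line, here is a short sketch using the Virtual Machine Manager PowerShell snap-in. The cmdlet names are from VMM 2008; the server name, VM name, and the Rating property access should be treated as illustrative rather than exact.

    # Treat every managed host as one logical pool and ask intelligent
    # placement to rate each host for a given workload.
    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
    Get-VMMServer -ComputerName 'vmm01.contoso.local' | Out-Null   # hypothetical VMM server

    $vm = Get-VM -Name 'payroll-web-01'                            # hypothetical VM
    Get-VMHostRating -VM $vm -VMHost (Get-VMHost) |
        Sort-Object -Property Rating -Descending |
        Select-Object -First 3    # the three best places for this piece of work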
The next release of our important technology that will really go to the next level of enabling that private cloud fabric is Windows Server 2008 R2. Having grown up in the NT business, I have a particular affinity and passion and love for the Windows Server product. I really do love this product. It’s in beta now, and in fact we’ll put out a release candidate in short order, and it will ship with the Windows 7 client.
In R2, our fabric capabilities become even stronger. The first, of course, is this focus on enhancing our virtualization capabilities. With Hyper-V now, we have the ability to address 32 logical processors on the host machine. That’s massive scalability for virtualization. We have, of course, increased availability with live migration, a feature that we all know we’ve wanted in this product for a while now. And then with remote desktop services, we’ve expanded terminal services to add capabilities for that end user, to deliver rich VDI-like experiences and traditional session-based experiences.
This is a really powerful way of thinking about how do I start to create desktop or user experiences as a service. And we start to think about that as pervasive across that stack. So that’s the first set of investments.
The second set of investments really are around the platform, streamlining the Web platform. Windows Server and IIS really provide that foundation for the Microsoft Web platform that’s optimized to give you massive performance and price advantages over the competition. IIS 7.5 builds on the great work that was done in 7.0 and really focuses on integrating new IIS extensions.
IIS extensions allow you to deploy applications, scale out your Web farm, deliver rich streaming media, and do it all very quickly. And part of our model on the R&D side has changed: instead of waiting for monolithic releases of IIS to contribute this functionality, we have a constant R&D cycle where we’re releasing these IIS extensions on a regular basis. You actually can start to take advantage of those much more rapidly.
One of the coolest features of Windows Server 2008 R2 is .NET on server core. Finally. It is so cool to have the ability to actually run ASP.NET and PHP applications on server core and administer that through either IIS Manager or PowerShell. The headless server running a .NET application.
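As a quick sketch of what administering a headless Web server from the shell can look like, the WebAdministration module that ships with IIS 7.5 exposes sites and application pools as cmdlets. The site name, port, and path below are made up for illustration.

    Import-Module WebAdministration   # ships with IIS 7.5 on Windows Server 2008 R2

    # Stand up an ASP.NET site on a server core box, no GUI required.
    New-WebAppPool -Name 'PortalPool'
    New-Website -Name 'Portal' -Port 8080 `
                -PhysicalPath 'C:\inetpub\portal' `
                -ApplicationPool 'PortalPool'

    Get-Website -Name 'Portal' | Select-Object Name, State   # confirm it's up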
Expect also later this week to see some very compelling cost-savings data on running .NET and WebSphere on Windows Server 2008 R2.
The third area of investment that’s very important is in these IT-driven policies and management. I can guarantee you that Windows Server 2008 R2 will be the most power-aware OS in the industry. We have focused a tremendous amount of effort on power management: turning down cores when they’re not being utilized, or parking them; power profiling; and being able to actually use a policy to set and lock the power limits on the server, using Group Policy to enforce that.
Very, very important advancement, particularly in today’s economy. This is a set of features that will really come home over the course of the next six to 12 months.
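A small sketch of what enforcing a power policy across a set of machines can look like from the shell; in practice this would be done through Group Policy, but powercfg.exe shows the mechanics. The server list is hypothetical; the GUID is the well-known identifier of the built-in Balanced plan.

    # Apply the Balanced power plan to every server in a list and read
    # back the active scheme to confirm.
    $balanced = '381b4222-f694-41f0-9685-ff5bb260df2e'
    $servers  = Get-Content 'C:\ops\servers.txt'      # hypothetical list

    Invoke-Command -ComputerName $servers -ScriptBlock {
        param($guid)
        powercfg.exe /setactive $guid
        powercfg.exe /getactivescheme
    } -ArgumentList $balanced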
PowerShell 2.0. PowerShell 2.0 supports advanced remote execution and is available on server core. We have hundreds and hundreds of cmdlets now, and this is in fact one of the most vibrant communities in Microsoft and in the industry. Ironically, the PowerShell blog is actually up there with Office’s. Love it.
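For those who have not tried it, the one-to-many remoting described here looks like the sketch below; the server list is hypothetical, everything else is stock PowerShell 2.0.

    # Fan a query out to many servers in a single call: here, when each
    # machine last booted.
    $servers = Get-Content 'C:\ops\servers.txt'
    Invoke-Command -ComputerName $servers -ScriptBlock {
        Get-WmiObject Win32_OperatingSystem |
            Select-Object CSName, LastBootUpTime
    }

    # And the cmdlet count is easy to check on any box:
    (Get-Command -CommandType Cmdlet).Count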
OK, the final area: server configuration. The last area of investment is the ability to actually analyze your server configuration with best practices analyzers and then to track how that system is doing against those best practices.
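Windows Server 2008 R2 surfaces those analyzers through a BestPractices PowerShell module; a sketch of a scan follows. The model ID shown is illustrative, Get-BpaModel lists what is actually installed on a given server, and exact parameter names are worth confirming with Get-Help.

    Import-Module BestPractices

    # See which best practices models this server has, run one, and list
    # everything that came back as more than informational.
    Get-BpaModel | Select-Object Id, LastScanTime

    Invoke-BpaModel 'Microsoft/Windows/WebServer'     # illustrative model ID
    Get-BpaResult  'Microsoft/Windows/WebServer' |
        Where-Object { $_.Severity -ne 'Information' } |
        Select-Object Title, Severity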
By my estimation, I look at this and I think, well, that’s not bad for a little product, little R2. It’s actually a doggone good product, and it’s going to deliver a tremendous amount of business value to you and to your business.
At roughly the same time, we will also ship System Center Virtual Machine Manager 2008 R2, roughly 60 days or so after the release of Windows Server. This is a very important product because this is how you, again, get that one tool set for managing your IT environment, physical and virtual. And we focused on feature enhancements similar to Windows Server’s to enable that data center fabric: focusing on live migration and the automation that it provides, and integrating deeply with PRO management packs. Those are really cool technologies, and you’ll see that in the demo in a moment.
Maintenance mode and VM migration, so that you can actually move VMs off a host while you’re doing a whole bunch of maintenance on the host itself, moving all of that work around without having to shut down systems. These are really critical things.
And enhanced storage and cluster support, which enables the Cluster Shared Volumes technology. Really deep investment in understanding how we can make the virtualized and physical environment start to present itself as a private cloud fabric.
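What maintenance mode amounts to can be sketched from the VMM shell: evacuate a host by live migrating each of its VMs to the best-rated alternative. Move-VM and Get-VMHostRating are VMM cmdlets, but the host names, the Rating and VMHost property accesses, and the destination path here are illustrative.

    # Evacuate a host before servicing it (VMM snap-in assumed loaded).
    $source = Get-VMHost -ComputerName 'hyperv03.contoso.local'
    $others = Get-VMHost | Where-Object { $_.Name -ne $source.Name }

    foreach ($vm in (Get-VM | Where-Object { $_.HostName -eq $source.Name })) {
        $best = (Get-VMHostRating -VM $vm -VMHost $others |
                 Sort-Object -Property Rating -Descending |
                 Select-Object -First 1).VMHost
        Move-VM -VM $vm -VMHost $best -Path 'C:\VMs'   # path is illustrative
    }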
OK, this is one where I get to stop now and show you some product. So rather than me just talk about why this stuff is so cool, I’d like to have Edwin Yuen come on. Edwin’s on my product management team and he’s going to come out and give us a great demonstration of VMM and live migration. Welcome, Edwin. (Applause.)
EDWIN YUEN: Hi, Bob.
BOB KELLY: Knock ’em dead.
EDWIN YUEN: Thanks. So what we want to demonstrate today is how System Center helps you manage your private cloud fabric.
Now, your private cloud fabric is made up of a variety of different resources. There are going to be data centers with physical and virtual servers, networks, storage, and applications. And as we broaden the number of those resources we’re managing, proper centralized management of those systems becomes more and more business critical.
Now, that’s where System Center comes in. What System Center does is go ahead and aggregate all those resources, their status and information, into a single consistent toolset so you can analyze and optimize that private cloud fabric.
So let’s take a look at an example of how we potentially can go ahead and do that. What we’re going to see is a static Visio diagram here. Now, I know many of you out there are using Visio to kind of lay out your physical data centers. But one of the basic issues that is pressing is: how do I know what’s going on in those servers and racks and data centers when all I’ve got is this physical layout?
Now, traditionally, that’s been a difficult thing to do. But now there’s a new add-in for Visio that links the Visio objects right into System Center data. So if I go ahead and do that, now that static diagram literally has come alive. It’s full of information, and it is live. Once the links are built, all the status information from System Center is brought right into this diagram, and now you have a living diagram not only of your physical data center, but of the status of the systems.
And if we go ahead and actually zoom right in, you can see that one of our areas is marked as red. What I’ll do is click on that, and we’ll see that one of our racks is marked as red. And if we click on that, what we’ll actually see is the actual server rack, all the server names, and, most important, the actual status that we normally see in Operations Manager, but we’re going to see it right inside Visio.
What this allows us to do is very quickly understand not only that we have systems having problems, but where they’re physically located, so we can rapidly troubleshoot that issue.
Now, System Center is much more than extended views into the current data center or into Visio. What we’ve done is we’ve worked with the Windows Server 2008 R2 team to extend power management into System Center. Windows Server 2008 R2 has advanced instrumentation that allows us to see power consumption and power usage in real time. You can actually set power budgets to limit consumption, and as systems reach the budget, they’ll automatically throttle back.
Now, that’s great for a single server, but what if you’re managing tens and hundreds and thousands of servers? Well, in that case, that’s where System Center comes in. So with System Center, we’re going to look at Operations Manager 2007 R2. And here what we actually see is the real-time power consumption of several of our servers. Down here on the bottom, what we can see is we’re actually managing an Intel box here in red, and then in the two blues, we’re managing a Dell 10th-generation PowerEdge server and a Dell 11th-generation PowerEdge server.
In fact, Dell’s 11th-generation PowerEdge server exposes advanced system functionality to a variety of partner solutions, including the Operations Manager management pack that we’re using here today.
What we’re going to do is take a closer look at that 11th-generation system. I’m going to uncheck the other two servers, and I’m going to look at the power budget. So in this case, my actual power usage is in blue; in yellow is my power budget. And as you can see, as we near or even exceed that power budget, the system automatically throttles itself back. This is an example of the dynamic optimization that System Center can allow you to do by looking at all the different servers’ power usage and setting up the power budgets for all these systems.
Now, it’s much more than just understanding and seeing this, but also optimizing our systems based on this power consumption. And it’s not just about optimizing for CPU, network, or disk IO; now we can take power consumption and power budgets into account.
Now, for a better example of that, we’re going to stay in Operations Manager. And what we’re going to see is we’re going to see a system here that’s going to have a critical state. Just had to refresh there. So we’ve got a critical state here on this box. And what we know is that this system has been consistently exceeding its power budget.
Now, we really want to reduce the power consumption, but we don’t want to lose any of the services that we’re running on that system. In this case, they’re actually virtual machines.
So what we want to do is live migrate those virtual machines off the affected server onto our unaffected servers to balance out our power. And how we do that is to take this alert information from Operations Manager and put it right into Virtual Machine Manager 2008 R2 through what we call PRO, and PRO tips.
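The plumbing that PRO automates can be approximated by hand from the two products' shells, which makes the idea concrete: read the alert in Operations Manager, act on it in Virtual Machine Manager. The alert name, host names, and path below are hypothetical, and both snap-ins are assumed to be loaded and connected.

    # Find open power-budget alerts, then live migrate VMs off each
    # affected host to a standby host.
    $alerts = Get-Alert -Criteria 'ResolutionState = 0' |
              Where-Object { $_.Name -like '*Power Budget Exceeded*' }

    foreach ($alert in $alerts) {
        $affected = $alert.NetbiosComputerName
        Get-VM | Where-Object { $_.HostName -eq $affected } | ForEach-Object {
            Move-VM -VM $_ -VMHost (Get-VMHost -ComputerName 'standby01') -Path 'C:\VMs'
        }
    }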
So we’re going to go right over to Virtual Machine Manager. And if I click on the PRO tips button, we actually have several PRO tips here. We’re going to have a PRO tip from Dell and a couple of PRO tips from HP. In fact, these PRO tips are based on Operations Manager management packs, so PRO-enabled management packs can be available from all our partners and are even extensible by you with custom management packs. In fact, the Dell 11th-generation system exposes more than just power consumption, but other system-specific information that we can leverage.
Now, as we take a look at that PRO tip, we’ll see it tells us which server is affected, what the problem is, and what our resolution is, and then we actually have an implement button. And with Virtual Machine Manager, we can automatically set the system to implement these PRO tips based on severity and the profiles and policies that you set in Virtual Machine Manager.
So what I’m going to do is go ahead and hit the implement button. It’s going to initialize that PRO tip. I’ll go to the jobs and we’ll see that starting now. And what I’m actually going to do is restore down – pull this down. And what I have here is actually a video that I have linked off a UNC share, running right off that VM. And while the live migration occurs, we’re going to let the video run, and we’re not going to see any user-perceptible downtime, any stutters or wipeouts.
So as the live migration progresses, what we really want to note is the difference between VMware’s VMotion and the live migration that we have here: ours is enabled through System Center. And with System Center, we can link the optimization not just to CPU, but to almost anything that Operations Manager can monitor: physical systems, local systems, and, more importantly, application and service-level monitoring. That gives you the end-to-end ability to optimize and move your virtual machines around, and that’s a capability that VMware just cannot match.
So what we see today is that System Center is really able to help you manage, analyze, and optimize all the resources in your private cloud fabric, and that, working with our partners, we’re going to bring together operating system, physical system, application, network, and storage management into a single set of tools that eliminates much of the cost and complexity of third-party solutions. Thank you. (Applause.)
BOB KELLY: Well done, great job.
So what Edwin just showed you is how virtualization changes a data center, and it’s really how we start to make abstracting the software from the hardware fabric relatively easy. And this notion of abstraction is really a foundational element to how we think about the private cloud.
Now, we’ve been talking about virtualization for quite a while. And today, most people think of virtualization as server virtualization, server hardware virtualization. The truth is, we’re using virtualization in many respects. We’re using this notion of abstraction, abstracting the OS from server hardware and in fact even the application from the OS, across our stack.
You know today that you can get application virtualization for the client side. Well, what’s really going to be very interesting as the data center moves forward is how we start to separate out or abstract server applications from the server fabric. That’s really cool stuff that’s going on.
In decoupling that application workload from the OS, you see two fundamental benefits. The first is xcopy deployment, where deploying and installing an application is, in fact, as simple as copying a file. The second is image-based management, where you can dramatically reduce the total number of images that have to be managed, patched, maintained, et cetera. This will be a huge transformation in how the private cloud really starts to deliver business benefits to end customers.
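A sketch of what xcopy-style deployment reduces to in practice: the "install" is a file copy of a pre-sequenced package, and update or rollback is the same operation with a different source. All paths and share names here are hypothetical.

    # Push a sequenced application image to every server in a Web farm.
    $package = '\\library01\packages\SearchApp'   # pre-sequenced app image
    $targets = Get-Content 'C:\ops\webfarm.txt'

    foreach ($server in $targets) {
        Copy-Item -Path $package -Destination "\\$server\d`$\apps" -Recurse -Force
    }
    # No installer runs and no registry is touched; removing the package
    # directory undoes the deployment.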
So what I’d like to do now is invite one of our engineers on stage to give you a preview of this technology that we’re working on. So Bill Mariah, could you come on out, please?
Hi Bill.
BILL MARIAH: How’s it going?
BOB KELLY: Go get it. (Applause.)
BILL MARIAH: All right. We’ve heard about all these incredible things that are possible with virtualization. What about applications? When we think about installing applications today, what does the world actually look like? It’s really still scripts, a lot of manual steps, and multiple installers.
Well, we think it’s time to change that. What we want to do is decouple applications from the underlying operating system using application virtualization technology. Today, we want to show you two very specific things.
First, we’re going to deploy a complex, real-world application out into the data center as an image. It can be as simple as copying a couple files and effectively pressing play. And then once the application is out there and deployed, we want to use image-based management in order to keep everything up to date. We’re going to do this by actually taking an application, lifting it up off a running OS, sliding the OS out so I get a new updated OS underneath, and then dropping the application back down again.
Take a look. All right. So you’re familiar with Virtual Machine Manager. And within Virtual Machine Manager, we have the notion of the library. This is the repository where we store all of the assets that we need. We have virtual hard disks, templates, we store scripts – basically all the building blocks. What we’re doing now in this test version, this updated version that we’re starting to work on in our labs right now, is actually adding applications into the library as first-class citizens.
So now when you think of applications, they’re images. So they can be started, deployed, and managed on demand. So let’s look at what it takes to roll out an application in this new world.
So I click over to the virtual machine. I have an actual running virtual machine out there in my data center. I simply click on it, right-click, select deploy application, and now I have all the applications that I saw in my library.
So what I do is just select, in this case, FAST ESP; it’s a powerful enterprise search application from Microsoft. Grab it, and click OK. And that’s it. So the application deployment has started.
Now think about that. If you really go through and consider what it takes, you would imagine that taking an application and trying to deploy it like this would involve rewriting it to some very strict app model, or somehow otherwise making it a trivial application. But that’s where application virtualization technology comes in. What we’re going to do is use the sequencing process of application virtualization, which lets you take the existing installers and the existing scripts and capture everything into a single image. And the beauty of these images is that it’s not just the core application, it’s all the other pieces that go along with it.
So if you have COM components, if you have supporting Windows services, if you have database drivers, it all goes into that package so it remains solid and under control.
So if I jump in, I take what was previously my clean operating system, and you can see I now have FAST ESP up and running inside the operating system image. So my application has been deployed.
So this is amazing, if we think about it. And we’re the only people doing this right now. VMware is talking about it as part of a roadmap, but we’re actually running real code in our labs right now.
So now this is great. You have the application, it’s out there running, maybe there are hundreds of other applications out in my data center as well. We’ve been hearing today about all of the incredible benefits of Windows Server 2008 R2 for managing the fabric of your data center. But what would happen if we wanted to roll this out across our data center? Instead of going out and touching all the active and running servers, wouldn’t it be great if we could go back into our library, keep a couple of golden templates that we manage inside the lab, keep everything up to date there with all the latest patches, all the latest operating system updates, and use those as the way we deploy the operating system?
So we’ve done just that. We’ve actually created a few images, and now I’m going to go back and take a look at the virtual machine, which in this case you can see is running the applications. I simply right-click on the operating system, click update operating system, and then from here what I can do is select either a template or, in this case, another running instance that I deployed a few minutes ago. I grab that, I click OK, and the process is started. You can see that things are updating.
So this is pretty amazing. Now, to an administrator, it’s as simple as this. Behind the scenes, as we talked about before, what we’re actually doing is taking the operating system and the application, lifting the app up along with all of its locally persisted state, sliding the old operating system out, sliding the new R2 operating system underneath, and then dropping the application back down again.
So if I look at this, I’m going to double-click on the original operating system that previously had FAST running, and you can see now this has gone back to its original, pristine, clean state.
If I close that back down and click on the second operating system, this is the one running R2, and if I look in here, you can see that FAST is now up and running inside of this operating system. (Applause.) Yeah, it’s pretty unbelievable, huh?
So this is it. What we’ve done is cleanly separate the application from the operating system to let you manage each of those independently. So you’re no longer managing all these permutations of operating system and application inside of a single image. You take each independently, and that way you only update each when it needs to be updated, rather than whenever one or the other changes. And that’s it. (Applause.)
BOB KELLY: Awesome, well done Bill. So I think you’re getting a good idea now what we mean by managing that fabric. The next part of the private cloud is the application platform and delivering the service that you actually have to deliver on that fabric.
For customers to get a cloud-like experience with an on-premise or private cloud, they must in fact manage the applications the same way that their customers experience them: not as individual components, but as an end-to-end, service-oriented experience.
So the principles of this kind of service delivery: First, availability is king. The service must always be available. This has business-level SLA requirements. It means that we can’t just be thinking about the component parts; we have to be held responsible for the availability of the business service. Second, it’s distributed but composed: distributed components structured together into a single service. Third, it’s heterogeneous in nature: physical, virtual, Windows, non-Windows. And finally, it’s elastic: IT must be able to expand and contract with your business needs.
These four elements are absolutely critical to being able to deliver that service, and they are foundational to the private cloud. So when we think about how this goes forward, it’s really very important to start thinking about things like TCO in a new way. In fact, I’m very proud of the work that the team has done. I just reviewed some of the data that they brought back. We’ve looked at TCO in a deep way in this sort of cloud-based world, and we found that you can manage more servers per FTE and reduce cost per server by over 60 percent when you use System Center to manage your highest-value applications. And when you focus on AD or file and print, you can manage as many as 300 more servers per FTE. And that’s because we’ve been thinking about moving more and more of the assets of management, more and more of what we do with System Center, up to that service level and not down at the box level.
So let’s talk about one of the latest updates to the System Center family. I’m announcing today the availability in 30 days of Operations Manager 2007 R2. This is a very exciting release for us. (Applause.)
Now, we’ve talked about the importance of applications. And when we think about it in the end-to-end context of delivering a service, we are really focusing this release on thinking through the entirety of that end-to-end, available, heterogeneous service. So managing the service and delivering service-level monitoring is really critical. The Visio work you just saw in Edwin’s demo is really about thinking of Visio as sort of a first-class citizen in looking at our IT fabric.
We’re focusing on extending beyond Windows. We promised you two or three MMSes ago that we would really invest in taking these benefits of System Center, the low-cost provider of a broad and deep management stack, to the non-Windows environment. As a part of R2, we delivered that for Linux and UNIX. And finally, we are really thinking end to end about how we ensure this heterogeneous environment, not just with the tools that we deliver, but also with the tools from our partners.
We have a tremendously robust partner ecosystem. Those PRO management packs, like the ones Edwin just showed you from HP and Dell, are really compelling. And the way that you should think about those is that the partner provides that knowledge, builds it into System Center, and now can automate a lot of the features and functions that we want to enable, from the hardware all the way through the service. Really very, very powerful stuff. And Operations Manager now is really a foundational element of our entire management suite.
And we have tons of customers using this product in beta today. You saw CDW. We have Johns Hopkins and many, many more. And the interoperability that we focused on really enables System Center to plug better into environments from Tivoli or from Remedy or from OpenView. And we have those tools in beta today.
So I would now like to have another engineer come on board and show you what it really means to deliver the service using Operations Manager 2007 R2. So join me in welcoming Lorenzo Ricci. (Applause.)
LORENZO RICCI: Thank you, Bob. Good morning. The life of an IT manager is not easy. Many of the challenges relate to the management of service levels and the management of workloads and services across a heterogeneous environment like a typical data center.
In the next few minutes, I’m going to show you how the imminent release of Operations Manager 2007 R2 brings new functionality and tools to address both of these challenges. In fact, let’s start with service-level monitoring.
What you see here is my corporate SharePoint portal. On it, I’ve published the new service-level dashboard which comes with Operations Manager 2007 R2.
As you can see up top, the list of applications and services my team is responsible for is displayed. At a glance, you can see that the accounting portal has not been meeting the agreed levels of service. Alongside it, the other services I’m responsible for have been operating within the target of the agreed service level.
Now, this dashboard delivers great value on two key points. The first is that it dramatically reduces the cost in time and effort to produce, publish, grant access to, and keep this information up to date.
Secondly, it not only leverages my current investment in SharePoint, but it just about eliminates the need for my customers and management to be granted access to or training on Operations Manager itself.
However, if you’re in IT operations like me, you know that I’m going to have to know way more than what is displayed here to answer my customers’ and my management’s questions about the lack of availability of the accounting portal.
This is where I’ll switch to one of my favorite reports in Operations Manager 2007 R2: the service-level tracking summary report. At a glance, it gives you the same information you saw in the dashboard. Unlike the dashboard, however, it allows me to compare different timeframes over which my availability is calculated: the month to date, alongside the last seven days and the report range, which I’ve configured to be the last 24 hours.
From this report, I can expand and drill into the different aspects of this application that I have configured service-level objectives for. As you can see, there’s a service-level objective of four nines for the MySQL database, for the Web site workflow, and for the Web site availability as measured by perspectives simulating my current customers. This is actually very, very important to me, and with Operations Manager 2007 R2, I can leverage the new levels of scale that are supported, which allow me to manage more than 1,000 Web sites from a single agent. This is phenomenal compared to the previous solution I was using, which not only didn’t scale to this level, but charged me for every single URL I was monitoring.
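As a quick back-of-the-envelope check on what a four-nines objective actually budgets, the arithmetic is easy to do in the shell:

    # Allowed downtime under a 99.99 percent availability objective.
    $slo = 0.9999
    $minutesPerYear = 365.25 * 24 * 60
    '{0:N1} minutes per year'  -f ((1 - $slo) * $minutesPerYear)        # ~52.6
    '{0:N1} minutes per month' -f ((1 - $slo) * $minutesPerYear / 12)   # ~4.4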
But these key indicators show me that while the workflow is down, most likely because the workflow relies on the database, the current outage is caused by the database itself. So let’s drill in to see more about the outage.
From here, I can see more visualizations and information about the MySQL database, and for brevity, I’m going to pivot to the availability report. This availability report is great because it gives me the availability hour by hour over the last 24 hours. As you can see, the MySQL server hasn’t been doing too well. Yesterday it was up and down. My team has been able to correct it, but it’s currently unavailable, with a failure that started about a couple of hours ago.
Now, before I follow up with my team to correct the situation, let me show you how the availability of the database contributes to the availability of the whole accounting portal. For this, I’m going to switch to the accounting portal diagram view. From this view, you can appreciate how the accounting portal is a distributed application that leverages workloads from Windows and non-Windows platforms. In fact, you see on the right-hand side that alongside my Windows servers, I have a SUSE Linux server contributing to this distributed application.
On the left-hand side, you see that beside my perspective monitors, which I’ve already discussed, I have an Apache Web server and I have a MySQL database, showing the full spectrum of Windows and LAMP-stack workloads.
To show you how seamlessly integrated the experience of monitoring non-Windows servers is, let me open Health Explorer and show you what’s going on on this particular Linux server.
I’ll bring up Health Explorer and expand availability, and I’m going to show you how, alongside the native monitoring provided by Microsoft, which covers processor, memory, networking, and storage, our partners at Novell have enriched the monitoring of this system to cover the application layer, covering workloads such as the SUSE firewall, file, and print services.
But as you can see in this case, there’s nothing going on on this machine. So let’s focus on the database failure.
I’m going to go back to the diagram view and select the MySQL database component. Now, if your Linux and Unix administrator colleagues are like mine, you will know that they are very sensitive to people touching their systems. At the same time, they’re very sensitive to being woken up at two o’clock in the morning for an outage which is easily fixed.
For this reason, I worked with them to leverage the role-based security features within Ops Manager to grant me access to a few key tasks to correct any operational failure on this particular server.
As you can see on the right-hand side, there’s a start MySQL server task. This task is provided by a management pack covering the MySQL server monitoring provided by Bridgeways. As I execute this task, my management server is going to communicate to the agent running on the SUSE Linux server to attempt to restart the MySQL server daemon.
As you can see, the task is executed with success. This will bring the availability of the MySQL database server back online momentarily.
The last thing you might have noticed is on the right-hand side, I have a yellow flag regarding one of my Windows servers. Let’s take a look at what that is about.
Once again, I’m going to open Health Explorer. And this time I notice that the failure is not software related; it is actually a hardware warning condition that the new management pack from HP, the Insight Control management pack, has been reporting.
Now, since this is just a hardware failure, the way I'm going to act on it is by moving any virtual workloads off of this machine and dispatching a technician to go and replace the cooling fan on this box.
So in summary, I showed you three key points of value that Operations Manager 2007 R2 brings to my business. First, it not only dramatically reduced the time and effort to manage my service levels, but it also increased transparency and trust between IT and the rest of the business.
Secondly, Operations Manager 2007 R2 provides a single product with which I can monitor my heterogeneous environment of Windows and non-Windows workloads.
Lastly, I showed you how partners like HP, Bridgeways, Dell, and Novell have enriched the monitoring provided by Microsoft to cover the full spectrum of hardware and software components in my data center. Thank you. (Applause.)
BOB KELLY: Great demo. That was awesome, well done. So I love that demo for lots of reasons, not the least of which is that it fulfills the promise we made to you a number of years ago about going cross-platform and extending the value of System Center across that entire piece of the infrastructure as you move to service orientation.
The next element of the cloud platform that I want to talk about is federation. We've talked about what it takes to manage your fabric and to deliver service. And we see this as your on-ramp, if you will, to the cloud. But we also believe that these services will be so pervasive within your private cloud and in the public cloud that you'll need to federate across that boundary.
You'll be able to consume services from both of those clouds and build coherent experiences, applications or services, across that infrastructure. So our plan is to build consistent and coherent APIs across the Microsoft stack and with our Azure cloud, but through work with third-party hosters and through things like the DMTF work, we're also going to be able to extend that and participate in other clouds. That is really why I'm so excited about what we've announced with the DMTF: as a founding member of the cloud incubator, we're going to focus on driving industry standards around how you communicate and collaborate and, in fact, federate applications across clouds.
The unique differentiator that Microsoft has here is the breadth of our stack and the depth of our stack, all managed with one coherent management experience, System Center. And when you see what we can do with that in a few moments, you'll understand where the future roadmap of System Center is going, and it will help you internalize the power of the cloud and how you can start to take advantage of it.
So what I'd like to do now is actually bring out one of our hosting partners, have a little Q&A with him, talk about how they're using our stuff, and talk about the value they see from building their cloud on top of the Microsoft stack. So please join me in welcoming Dominic Foster, the CTO of MaximumASP, to share his experiences. Hey, Dom. (Applause.)
DOMINIC FOSTER: Hi, Bob.
BOB KELLY: Great to see you, grab a seat. So tell us a little bit about MaximumASP and your role there.
DOMINIC FOSTER: I am the CTO of MaximumASP. We're a Microsoft-centric Web hosting provider, and we've been in business since 2001. At that time, we noticed there were pretty much two products out there in the Microsoft world: the low-end, insecure, poorly performing shared hosting, and then the high-end dedicated hosting, which was out of the reach of most people.
BOB KELLY: Right.
DOMINIC FOSTER: So we wanted to try and bridge the gap between the two of them. So we came out with a product that was in between the two: high performance, built on top of Microsoft technology, very secure, with really good performance and good hardware. And then we noticed, hey, we've got this rack of servers. We love it, it's great, but our customers are outgrowing it.
So now we've got to start rolling out dedicated servers. So our data center footprint started to grow. We started using power, we started using cooling, we started buying hardware and spending a lot of money and filling up data centers.
So about that time, a few years ago, we came to MMS and we saw Hyper-V on the roadmap that you guys laid out. And we said, "We've got to attach to this. It's going to shrink our footprint and really give us a product that we can stand behind."
BOB KELLY: That’s fantastic. Now, as a result of that, you’re delivering services to customers in 60 countries globally.
DOMINIC FOSTER: 60 different countries and we’ve got about 2500 servers right now.
BOB KELLY: So how have things changed for you and your business in recent years?
DOMINIC FOSTER: In recent years, we've seen that the level of service has to be higher. Eight years ago, hey, if a Web site was down, you'd go back to it later. In the on-demand society we're in, if somebody goes to a Web site that's down, they're not going back to it. So that quality of service has to be extremely high.
BOB KELLY: So focusing on service and service-level agreement with your customers became sort of your next differentiator, if you will, for your customer base?
DOMINIC FOSTER: Absolutely. They require high availability. They want that HA out of the cloud. They want that monitoring.
BOB KELLY: I see. And so you made a huge bet on Microsoft. Right? Why don’t you tell us a little bit about your infrastructure and why you made that bet.
DOMINIC FOSTER: We definitely made a big bet on Microsoft, but we didn't just swallow the Kool-Aid. We looked at all the vendors that were out there, we saw the roadmap that you were coming down with Hyper-V, and, more importantly for our infrastructure, we've got System Center. So we're managing physical servers and virtual servers with one single pane of glass. For us, it was a no-brainer.
BOB KELLY: Got it. I also understand you're big users of PowerShell.
DOMINIC FOSTER: Absolutely. We love PowerShell. PowerShell is great. Anything that you can do on Windows Server or System Center through the GUI, you can do with PowerShell. We built our whole automation process on PowerShell.
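As a flavor of what that GUI-to-script parity looks like, here is a minimal sketch using the VMM 2008 cmdlets to deploy a VM from a template; the server, template, host, and path names are all hypothetical placeholders:

    # A minimal sketch of GUI-equivalent automation with the VMM 2008
    # snap-in: deploy a new VM from a template. All names are hypothetical.
    Add-PSSnapin "Microsoft.SystemCenter.VirtualMachineManager"
    Get-VMMServer -ComputerName "vmm01.example.local" | Out-Null   # connect to VMM
    $template = Get-Template | Where-Object { $_.Name -eq "Web-IIS-Base" }
    $vmHost   = Get-VMHost -ComputerName "hyperv07.example.local"
    New-VM -Template $template -Name "web042" -VMHost $vmHost -Path "D:\VMs"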
BOB KELLY: So what do customers ask you to do today? How does this fit within the discussion that we've had today around cloud?
DOMINIC FOSTER: The cloud is a little bit confusing to some people, so we try and educate them as best we can. They want that HA. They don't want to fuss around with clustering; they just want to make sure the servers are going to stay up. They want to make sure that it's very elastic, scaling in and out as demand requires. And they really want to blur the lines between on-site and in the cloud.
BOB KELLY: That's one of the unique things we've had a lot of fun talking about today, and as we've been thinking this through: the notion that you can take advantage of the elastic nature of the technology we're building to deliver service, to deliver a public and private cloud for your own customers.
DOMINIC FOSTER: Oh, absolutely. Microsoft does a great job of extending that out. It's not just Microsoft selling something. As partners and as customers, you can attach to it and upsell.
BOB KELLY: Super. So what are you most excited about on the roadmap?
DOMINIC FOSTER: On the roadmap? There are lots of great things. Everything you've been talking about, but the big thing for us is Hyper-V: live migration is huge. Hot-add storage, awesome. System Center, some of the pieces you guys have coming down are awesome. Virtual Machine Manager, blurring that line, federating the cloud between private and public, which we'll see in a little bit, is awesome. The PRO tips that you guys are tweaking and making better, every single release it gets better and better.
BOB KELLY: Dom, awesome. Thanks very much. We really appreciate you coming all the way out.
DOMINIC FOSTER: Thank you.
BOB KELLY: Give it up for Dom, please. (Applause.)
So what's really cool is not just to hear how MaximumASP is thinking about the transformation of their business and the transformation of their use of IT; now we're also going to give you another tech preview of some really interesting ways you can start thinking about your investments today in System Center and where that takes you as we move forward with the public and private cloud.
What I'd like to do now is invite Michael Michael, who is going to come out and give us a really cool demonstration of some future technology around Virtual Machine Manager. Michael? (Applause.)
MICHAEL MICHAEL: Hi, Bob. Thank you. Good morning everyone. Isn’t it exciting to be here in Las Vegas for MMS? (Applause.)
Today I’m going to show you something even more exciting. We’re going to give you insight into technology working in our lab today. This technology is going to be part of a future release of System Center Virtual Machine Manager. What we’re going to show you is how we will enable you to take advantage of the Microsoft cloud strategy.
This strategy will leverage all your private cloud resources as well as public cloud environments such as the one offered by MaximumASP.
I have a number of services running in my data center today, and one of the key challenges that we're all used to facing is how difficult it is to rapidly respond to increasing capacity requirements when I've maxed out everything in my data center. If I don't have a spare server, server farm, or cluster and my application is demanding five more, then I'm in trouble.
This is exactly what has happened to me today, so I'm going to take advantage of MaximumASP's public cloud hosting capabilities to give me the extra capacity that I need.
I've gone ahead and set up an account with MaximumASP, which has provisioned additional resources and capacity for me to use. Let's take a look at the MaximumASP self-service portal. I can log in here and view my account information.
If I click on the MaxV cloud overview, I get an overview of both the physical and the virtual servers that MaximumASP has provisioned for me. I can manage my VMs here, I can view resource utilization, or I can put in a request for additional capacity.
MaximumASP is part of a set of hosters that are building public cloud hosting capabilities through deep partnerships with Microsoft.
In addition, MaximumASP is integrated with System Center and is built on top of System Center. What that means is that I can take advantage of the extra capacity they have provisioned for me from the familiar System Center Virtual Machine Manager console, which is what I'm using for my private cloud today.
Let's take a look at the System Center Virtual Machine Manager console. As you can see, I'm already managing a set of hosts and virtual machines here under the private cloud host group. Let's go ahead and add our cloud resources to this environment. If you're familiar with Virtual Machine Manager, you will notice that there's a new action here called "add public resources." This action will allow me to federate the private cloud with the public cloud.
Let's go ahead and click on that. My account information is already pre-populated, so all I have to do is enter the Virtual Machine Manager server name and my password. As you notice, Virtual Machine Manager has immediately begun to show the extra capacity that MaximumASP has provisioned for me within the same console. The hosts with the cloud icons are part of the public cloud hosted by MaximumASP, while the hosts with the regular server icon are part of my private cloud.
This is an integrated view of both virtual as well as physical servers both on and off-premise. Now how cool is that? (Applause.)
Vendors like VMware want you to believe that you have to revolutionize both the data center and the cloud in order to take advantage of scenarios like this. They want you to rearchitect your entire infrastructure. Our existing System Center customers have the foundation today by which to seamlessly move from the private cloud to the public cloud.
It is great that I can manage my entire infrastructure both on- and off-premise using the same console. But now I would like to move a virtual machine from the private cloud to the public cloud in order to free up some capacity on some of my overloaded servers.
Let's go ahead and pick the virtual machine named Web server and click on the migrate action in order to move it up to the public cloud. Because MaximumASP is also built on top of System Center, this operation is entirely seamless and it can take advantage of System Center features like intelligent placement.
I will go ahead and pick the highest-rated host from MaximumASP as the target host for my migration. Let's pick the default path for the virtual machine and pick a virtual network.
Now we’re ready to move the virtual machine from the private cloud to the public cloud. Notice that the VM is already under migration and will move from the on-premise host to this cloud host.
While this is federating up to the public cloud, it’s going to take a little bit of time. But once it’s complete, my virtual machine will be there and it will be available for my organization to use.
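The cross-cloud cmdlets in this preview are not public, but the placement-plus-migration flow the demo follows exists in VMM 2008 today. A sketch of the on-premises equivalent, with all names hypothetical placeholders:

    # A sketch of intelligent placement plus migration with the VMM 2008
    # cmdlets. The federation in the demo is a preview; this is the
    # on-premises equivalent. All names are hypothetical.
    $vm      = Get-VM | Where-Object { $_.Name -eq "WebServer" }
    $ratings = Get-VMHostRating -VM $vm -VMHost (Get-VMHost) |
               Sort-Object -Property Rating -Descending
    $target  = $ratings[0].VMHost                      # intelligent placement: top-rated host
    Move-VM -VM $vm -VMHost $target -Path "D:\VMs"     # migrate to the chosen host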
A lot of you are going to start thinking, "Are there any limitations to this approach? Are there any restrictions? Can I still manage my VM?" There are absolutely no limitations. I can manage the same virtual machine using the same set of tools whether it's in the private cloud or the public cloud. My virtual machine will be able to use the same System Center features that I use today.
However, managing a virtual machine across the private cloud and the public cloud is not only about migration. Microsoft is making deep investments and deep bets in security and identity federation in order to complete this scenario. There it is, by the way: my virtual machine has already migrated up to the public cloud and is available for me to use. (Applause.)
I have been able to increase the capacity of my data center without having to rewrite my application, because MaximumASP is built on top of the same platform and the same technology I am using in my private cloud today. That means that I can get extra capacity, and that capacity can change over time as my organization's needs evolve.
This preview of a future release of System Center Virtual Machine Manager shows you the potential of integrating the private cloud with the public cloud. You can get started today by taking advantage of MaximumASP's public cloud hosting capabilities. Thank you. (Applause.)
BOB KELLY: Awesome. I love that demo. I love that demo. To me, that shows you the power of the way that we’ve been thinking about these scenarios.
We talk about the private cloud fabric. We talk about delivering service. We talk about the integration of being able to now have private and public clouds federated and then managed as a singular environment. Nobody else in the industry is doing this. That is a really powerful statement and it’s one that you all can feel confident in that you’re on the right path.
You just saw from Michael how that same tool you use in your data center can be the tool that you manage across this environment. And we believe fundamentally that the assets we have in System Center, in Active Directory, in our security infrastructure and in our management infrastructure will be key differentiators to deliver this end-to-end experience.
And so today I'm announcing two things, two things to help you get started on this path. The first is the Dynamic Data Center Toolkit for hosters. This is available today, for hosters to ramp up their services to take advantage of these technologies, and it's a piece of guidance for how to roll out and deploy this fabric.
And available in 90 days is the Dynamic Data Center Toolkit for enterprises. This will be for the businesses who want to take advantage of and get on the path to private clouds in their data center. It will include implementation guidance, reference architecture diagrams, and some software around the portal, chargeback, et cetera.
We’re really very excited about this technology and the tools that we’re delivering today for you to start taking advantage of these assets and move towards cloud computing in your own environment.
The things that you really should be thinking about on this path to cloud computing are Microsoft's approach to solving this problem for you. One, integration from the infrastructure to the application model: that's the breadth and depth of our stack. Two, management from physical to virtual and across clouds, and in fact also covering non-Windows environments. And three, we will deliver this capability to you at the lowest price and at the best value.
That's our promise to you. So what can you do now? Virtualize. Virtualize, virtualize, virtualize. Get going. Take advantage of the technology that's been released and the technology that's coming to start yourself on that path to cloud-based computing.
Two, automate what you can. Not everything can be automated, but automate what you can so that you start to move out of box management into service-level management and start to use policy to enforce what you really want IT to do for the business.
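As one admittedly toy illustration of that shift from box management toward alert-driven, policy-style automation: a sketch from the OpsMgr 2007 R2 Command Shell that sweeps new critical "service stopped" alerts and runs a recovery task against each source object; the criteria, alert, and task names are hypothetical placeholders.

    # A toy sketch of alert-driven automation. Criteria, alert, and task
    # names are hypothetical placeholders.
    $alerts = Get-Alert -Criteria "ResolutionState = 0 AND Severity = 2"   # new critical alerts
    foreach ($alert in $alerts | Where-Object { $_.Name -match "Service Stopped" }) {
        $target = Get-MonitoringObject -Id $alert.MonitoringObjectId
        $task   = Get-Task |
                  Where-Object { $_.DisplayName -eq "Restart Service" } |
                  Select-Object -First 1
        Start-Task -Task $task -TargetMonitoringObject $target
    }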
Three: Drive these new capabilities for your organization. You are at a unique time in IT. We don’t call this a revolution, we call it an evolution, but it is a unique time where we’re seeing many, many, many trends come together. And we are set up for what could be one of the most compelling times to be in IT in the history of man. We’re really excited about what we’re doing to help you.
Thank you very much for what you do every day. Thanks for allowing me to come and spend some time with you at MMS, and enjoy the rest of the show. Thank you. (Applause.)
END