Bob Muglia: Microsoft Management Summit 2008 Keynote

Remarks by Bob Muglia, Senior Vice President, Server and Tools Business
Las Vegas, April 29, 2008

ANNOUNCER: Ladies and gentlemen, please welcome Bob Muglia, Senior Vice President, Microsoft Corporation. (Applause.)

BOB MUGLIA: Good morning, MMS! Good morning and welcome to MMS. We have a great few days here for you. We think there are a lot of great products for you to take a look at, and a lot of great things for you to do, working together.

One of the things that we have done over the last, oh, two and a half months or so is we’ve been going around the world launching several products, launching Windows Server 2008, SQL Server 2008, and Visual Studio 2008.

The way we’ve done that is we’ve gone to over 200 cities and had conferences with our customers. In the process we’ve really learned a lot, and we’ve celebrated a lot. It’s been a celebration, and it’s really been a celebration about the heroes that make up IT.

That really is the way Microsoft views IT, because we recognize that business is run by the technology systems that sit behind it, and it is the IT staff that makes that happen, it’s you who make that happen.

So, we’ve been celebrating that, treating everyone, all of you as heroes, which you certainly are this week here at MMS. So, it’s been great.

One of the things that we’ve recognized over time is that our role is as a software provider in the industry, a platform provider, management provider, a company that enables partners across the industry to help you solve your problems. Your role is to make the magic happen.

You’re the ones that make your companies sing in the things that you really can do within your business.

We’ve done this, we’ve helped that by listening to you. We always come back to listening to you. When I look at the products that we’re introducing — SQL Server, Windows Server, all of System Center — these are really products that were designed by you in a lot of senses and built very much for you, and I think you’ll learn about that today, this morning, and over the next few days.

Now, one of the things we’ve been working on for some time is Dynamic IT, and Dynamic IT is the evolution of the Dynamic Systems Initiative. We’ve been talking about that here for the last five years. When we first introduced this, we said it was a 10-year vision, and we had a roadmap for that. We’re five years into it. I think it’s been a great five years; we’ve made a lot of progress over that period of time.

One of the most important things we’ve done is recognized that we can help to provide you with a roadmap to the success of your IT. We call that the optimization models, and there are four stages really. It starts with basic. When most companies look at the state of their IT, they’re really in a pretty basic state; probably not you, but certainly many of the other folks that aren’t here today are in a pretty basic state. In that world IT is really a cost center.

As you evolve your IT and make it better, taking advantage of what technology can provide, you can turn it into an efficient cost center, ultimately working toward making it a business enabler, and then what companies really see is IT as a strategic asset.

So, we have these optimization models. They’re on the Web. You can take a look at them. They’re a guideline for you to understand how you are running your company relative to the best practices that exist in the industry.

We’ve learned that when companies take advantage of these best practices, when they implement these technologies within their organization, they reduce their costs and make their IT systems much, much more effective for their organization.

Let’s start by taking a look at how one company has utilized Dynamic IT to improve the way they work within their company. Let’s run the video.

(Video segment.)

BOB MUGLIA: That’s one customer who has looked at the optimization models and really used them to help improve their own effectiveness. I really do encourage you to look at this, because it’s something that you can very easily take advantage of, and really, like I said, compare yourself to best practices that exist in the industry.

Now, this morning what I’m going to do is go through the evolution of the datacenter. This talk is all about the datacenter. We’re going to talk about the evolution of the datacenter from where it is today to an environment which is really dynamic and the steps that we are taking to help you to enable that within your company.

Tomorrow, Brad Anderson, General Manager of System Center, will come up and talk about the desktop and how you can make the desktop environment within your company much more effective and reduce your costs considerably for your end users.

So, let’s sort of evolve through the stages of the datacenter. Let’s take a look at the different components that exist.

Dynamic IT Datacenter Components

First there's the physical environment, all the hardware that exists, and we will see an evolution of the hardware. Certainly we've seen the introduction of blades as an important form factor. In larger datacenters, the form factors continue to change to allow for much more efficient cooling and reduced energy consumption. A lot of great things are happening with many-core processors, and there is incredible power now available to IT shops to solve business problems that we simply couldn't solve before.

So, the physical environment continues to change, and we’re working very closely with all of our hardware partners to track the advancements they’re making, and then enable that in our operating systems, in our management tools, as well as in the applications.

Virtualization is key. When we talk about virtualization today, people see an opportunity to reduce costs and gain significant efficiency in their environment.

I do want to highlight that when people talk about virtualization, they are usually speaking of what I would call hardware virtualization or server virtualization, so virtualizing the operating system itself. That does allow for a great deal of consolidation, and it also simplifies the ability to place images on a given physical computer. So, it's core to the evolution of a dynamic datacenter.

Now, sometimes people think that hardware virtualization is enough to really solve the problems of dynamic placement of applications within a datacenter environment, but it really isn't. It's an important step and an amazingly important enabling technology, one that is going very mainstream right now, but by itself it is insufficient to solve the problems of the dynamic datacenter.

To really solve those problems there are two more things that are required. The first is the virtualization of the applications themselves, taking the applications that exist within the server environment and separating their state from the rest of the operating system. When you place an application out there, you really want that state to be separated, and we'll talk a little more about how we're doing that and what the steps are.

The third piece, which is very critical, is really the models, because systems today don't understand the components of applications. This is something we've talked about with DSI over a number of years. It is a remarkable thing that computer systems, whether Windows, Linux, UNIX, or even mainframes, don't really have a fully conceptualized model of all the components of an application and all their interrelationships, available in a standardized way that persists through the lifecycle of the application and exists at every stage of the application system, so that the operating system and the application itself all have a consistent, standardized view of those models.

So, this morning what I’m going to do is take you through those stages, and talk about the advancements we’re making, the work we’re doing with the rest of the industry to fully enable this within your environment.

We’re going to start with the physical datacenter and the placement of operating system images on the physical datacenter.

Dinner Now Demo

In order to do this we'll run a demo. We're going to have three demos, and they all center on the same company, a company called Dinner Now. It's a company that allows you to place an order over the Web for dinner from a number of restaurants, and then have that delivered to your doorstep. This company, like many others, has a datacenter environment that consists of Windows as well as Linux and UNIX. It's a heterogeneous environment. We'll talk about how you might think about managing that.

With that, what I’d like to do is invite Michael Kelly up to talk about Dinner Now. Michael? Good morning, Michael.

MICHAEL KELLY: Good morning. (Applause.) Thank you, Bob.

Dinner Now is embracing Microsoft's Dynamic IT model, and toward that end they're going to virtualize a large number of servers in their datacenter. They're going to deploy Windows Server 2008 and Hyper-V. Of course, System Center Configuration Manager is a great way to accomplish that operating system deployment.

This is the Configuration Manager console. Hopefully many of you are getting very familiar with this console.

I’m going to open a task sequence, which is the mechanism that Configuration Manager uses to do operating system deployments.

Now, this task sequence follows the standard paradigm for a typical task sequence. We’re going to reboot into Windows PE. We’ll apply an operating system image. In this case that will be Windows Server 2008. And eventually we’ll boot up into that new operating system, and install the latest software updates and the applications that are appropriate for this computer.

Now, this task sequence can be used for desktop deployment or laptop deployment, as well as for servers in a datacenter.

When I’m deploying servers in a datacenter, there are typically some unique requirements that I have. I’ll probably need to do some hardware configuration before I actually lay down the operating system image, and I’ll probably need to set up some server roles that are appropriate for that server and its usage in the datacenter. So, let’s take a look at how Configuration Manager can be used to accomplish both of those tasks.

Configuration Manager can run custom scripts, and with custom scripts I can deploy most any hardware. But scripts take some work. If you’re like me, you have to get Notepad out, you have to write the script, you have to debug it, you have to get everything working.
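To give a sense of what that hand-rolled approach looks like, here is a rough sketch of the kind of pre-OS script the task-sequence UI saves you from writing. The vendor utility name, its switches, and the paths are illustrative assumptions, not a real Dell interface:

```powershell
# Hypothetical pre-OS hardware configuration script (illustrative only).
# Assumes a vendor CLI tool named raidcfg.exe with these switches.
$raidTool = "X:\Tools\raidcfg.exe"

# Create a RAID 5 virtual disk across the first three physical disks.
$proc = Start-Process -FilePath $raidTool `
        -ArgumentList "-ctrl=0 -create -level=5 -disks=0,1,2" `
        -Wait -PassThru

if ($proc.ExitCode -ne 0) {
    Write-Error "RAID configuration failed with exit code $($proc.ExitCode)"
    exit $proc.ExitCode   # surface the failure to the task sequence
}
```

Every line of error handling here is something you would otherwise write and debug yourself in Notepad; the configuration pack wraps all of it behind the task sequence editor UI.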

So, Microsoft has partnered with several of our key hardware OEMs to add built-in UI into the task sequence editor so that you don’t have to do that scripting work.

On the add menu here, you can see we now have a Dell deployment action that I can add to my task sequence to configure my PowerEdge servers.

The first thing I’m going to do is set up a RAID 5 array. So, I’m going to choose RAID configuration. I’m going to select set, and I’m going to choose an INI file that is already set up as part of this configuration pack from Dell. I’ll choose that INI file, click apply, and I’m done. That step as part of my task sequence will set up this RAID array.

BOB MUGLIA: So, one of the things we’ve done is worked across the industry with our hardware partners. Dell has done a great job on this, but we’ve worked with the other hardware partners as well to set it up so that it’s very straightforward for you to do these deployments in an automated way, and they’ve done a lot of configuration work for you.

MICHAEL KELLY: The next thing I need to do, particularly if I'm going to deploy Hyper-V, is make sure some BIOS settings are correct. In particular, modern processors have a virtualization flag. So, I'm going to add another step to this task sequence, and in this case I'm going to configure the BIOS settings.

Again I'm going to choose set, and I'm going to pick an INI file that's got a lot of these settings in it. Let's take a look at what this INI file looks like: lots of different settings. Now, the one I'm interested in, of course, is virtualization. I'm going to scroll down to the end of the list, and right here at the bottom is the virtualization setting, which is disabled at the moment. I'm going to change that to enabled. We'll click "okay," click "apply," and I've now added that step to my task sequence. So, when I use this task sequence to do an OS deployment, it will make sure that that flag really is set, so that when I go to deploy Hyper-V, it will work properly. This saves me from having to go physically to each computer, get into the BIOS screens, and manually turn on that flag.
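The INI file in this step is just a flat list of key/value settings. As a rough illustration (the actual section and key names in the vendor's configuration pack may differ), the fragment with the virtualization entry might look something like this:

```ini
; Illustrative BIOS settings fragment; real key names vary by vendor pack.
[BIOSSettings]
NumLock=On
BootOrder=hdd,cdrom,nic
; The flag the Hyper-V deployment depends on:
Virtualization=Enable
```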

So, this wraps up the hardware configuration, which has all been done with this Dell Server Deployment Pack. In fact, it's going to be available in beta form on the Dell Web site in the next couple of weeks, and you'll be able to download it and start to play with it. Once I've accomplished this, I'm ready to move on to server role deployment.

So, let’s take a look at server role configuration. Server roles have been formalized in Windows Server 2008, and there’s a great tool that’s built into Windows Server 2008, the Server Manager, that allows you to install server roles, configure server roles, and do all of that stuff.

But we want to automate the server role installation process as part of the task sequence here.

BOB MUGLIA: When we designed Server Manager, we really thought about it as something that a medium-sized business would use when configuring an individual server, but for a larger company that has tens or even hundreds of servers or more, it's really not appropriate. You need something more that will go across all those servers, and that's what this is about.

MICHAEL KELLY: So, to do that server role deployment I’m going to show you the Microsoft Deployment Toolkit, which is an add-in to Configuration Manager. If you’re not familiar with the Microsoft Deployment Toolkit, think of it as a successor to Business Desktop Deployment or BDD. I’m sure many of you are familiar with BDD or have used BDD.

So, I’m going to go down to the Microsoft Deployment Toolkit, and again I’ve got some new actions that are available to me in my task sequence editor, and I’m going to choose “install roles and features.” So, I’m going to add this step to our task sequence. You can see all the different server roles that are available: Active Directory domain controller, you see DHCP server, DNS, and, of course, Hyper-V.

If I want to install Hyper-V, I simply check the box, click apply, and I’m done. Hyper-V is one of the simpler roles that doesn’t require any additional configuration. Of course, having set this up on some servers, you’re probably going to use System Center Virtual Machine Manager then to create the VMs and to manage Hyper-V on an ongoing basis.

Now, some roles though do require some additional configuration, so let me just show you an example of that. If I had set up Active Directory, then I could add another step to my task sequence to configure Active Directory Domain Services. I’ll add this step to my task sequence, I’ll type in the appropriate parameters, and very simply I am done.

Again, I can click apply and I have added that appropriate step to my task sequence to configure Active Directory. No scripting is needed, no getting into Notepad, no messing around with dcpromo in order to get the Active Directory stuff set up properly.
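Under the covers, automating Active Directory setup on Windows Server 2008 ordinarily means feeding dcpromo an unattend answer file, and that is the tedium this step hides. A minimal sketch of such a file (the domain name and password here are placeholders, and the exact set of required keys may vary) looks roughly like:

```ini
; Invoked as: dcpromo /unattend:answers.txt -- illustrative values only
[DCInstall]
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=corp.dinnernow.example
SafeModeAdminPassword=PLACEHOLDER
RebootOnCompletion=Yes
```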

BOB MUGLIA: So, all of these things, all of these steps are part of a consistent set of tasks that you can use to deploy the variety of roles that you have within your environment on all the physical servers.

MICHAEL KELLY: So, that now wraps up my task sequence. I’m going to click “okay.”

Now I need to deploy this task sequence to my servers. In fact, as part of Dinner Now, we’re expecting a lot of orders to come in this week during this show. You guys and gals out there are going to get the midnight munchies, and you’re going to want to place an order. So, we want to get this deployed to lots of servers.

To do that we’re going to use a new capability that’s coming in the R2 release of System Center Configuration Manager. That capability is multicast.

Let’s go down and look at the properties of a distribution point in Config Manager, and as you can see, we now have a multicast tab available on the distribution point. This gives me the ability to enable multicast on this distribution point, and set the various parameters that are appropriate for the multicast protocol.

BOB MUGLIA: Multicast has been one of the things you've been asking for in Configuration Manager for a while. It's an example of us listening to your feedback, and it really does dramatically reduce the time it takes to deploy tens or even hundreds of servers.

MICHAEL KELLY: And, of course, it does a great job of reducing the load on your network as well when you’re deploying lots of servers.

So, having created the task sequence, got multicast set up, then I would advertise that task sequence to a collection of computers, just like I would do with any other thing in Configuration Manager.

So, let’s assume that’s been done, and then I can come back and monitor the overall progress of that OS deployment. As you see here, I’ve got a task sequence that I advertised earlier. You can see that it was targeted at nine servers, and you can see the overall status of those servers from the central console. So, you as the IT admin can get an overall picture of what’s going on, what’s succeeding, what’s still in progress, and if there are any failures that you might need to go deal with.

So, that’s the way that Configuration Manager can help you begin to virtualize your datacenter. We can help you set up the hardware ahead of time, get the pre-OS hardware configuration done, lay down that operating system image, and then automate the configuration and installation of the server roles that are appropriate for what you’re doing.

Thank you very much, Bob.

BOB MUGLIA: Great. Thank you very much, Michael. (Applause.)

So, that gives you an idea of the evolution of Configuration Manager to really help you automate the deployment of images within your datacenter. We've done a lot in Windows Server 2008 with Windows Deployment Services to really simplify this and provide you with a great deal of flexibility to deploy both full images as well as Server Core, which is a minimal installation of Windows Server that provides no graphical UI, just the basics of what you need to run a wide variety of roles.

Now we’re going to move on and talk a little bit about virtualization.

So, the next step is really to think about the hardware virtualization environment, and how we can work with you to really solve the breadth of the problems that you have.

As I’m sure everybody knows, we are introducing this summer Hyper-V, which is a very modern hypervisor built into Windows Server 2008, to provide you with highly efficient virtualized machines and the ability to run a wide variety of heterogeneous systems, Windows, Linux, on top of this Hyper-V hypervisor.

We’ve been really pleased by the reaction we’ve seen from the industry as we’ve been through beta of Hyper-V, and really, really pleased with the stability and performance we’re seeing.

MSDN and TechNet are fully virtualized right now with Hyper-V, and we saw dramatic cost savings when we virtualized these very high volume Web sites within Microsoft. Of course, we'll be deploying it throughout our datacenters, both for our internal IT and for our external-facing properties, in the coming months.

We've been really pleased with the performance, frankly kind of surprised at how good the performance has been. When you ship the first version of something, you know you can get the quality right, but sometimes it takes a little while to be fully competitive from a performance perspective. Let's face it, in VMware we have a competitor that's been working at this for years.

We ran our first benchmarks last fall, and were really surprised to find that our performance overall was quite competitive with ESX. I’m a guy that’s been around for a long time; I remember the old days of Windows NT and trying to be competitive with Novell in file sharing, and it took us till NT4, three releases, to be competitive.

We will be competitive in the performance of virtualization with the first release of Hyper-V. So, we’re very, very proud and very excited about that.

We also feel that it’s important to have a very consistent and complete, fully integrated management solution to go along with the Hyper-V infrastructure, and for that we have System Center Virtual Machine Manager.

Demo: System Center Virtual Machine Manager 2008

I’m pleased to announce today the availability of the beta release of System Center Virtual Machine Manager 2008, which will manage Hyper-V, and also manage VMware ESX.

With that, what I’d like to do is introduce Rakesh Malhotra to come on out and show us System Center Virtual Machine Manager 2008. Rakesh? (Applause.) Good morning, sir!

RAKESH MALHOTRA: Right, thanks, Bob.

So, with System Center one of the things we're really trying to accomplish is to unify the management of physical and virtual machines. You saw earlier how you could use a tool like Configuration Manager to rapidly provision new physical servers in your datacenter and enable the Hyper-V role.

So, I’m going to go ahead and add virtual machines to that mix, and show you how that enables some dynamic new capabilities.

So, what you’re seeing here is System Center Virtual Machine Manager 2008. This is our console for managing your virtualized infrastructure.

In the 2007 version of the product we incorporated functionality that allowed you to do things like server consolidation and rapid provisioning, and to drive agility with Virtual Server. We've improved on every one of those features by taking advantage of the new functionality available in Hyper-V, built right into Windows Server 2008.

In addition, our customers have been telling us that they want to use a single console to manage their entire infrastructure, regardless of the hypervisor. So, you can manage virtual server, you can manage Hyper-V, and you can even manage VMware with Virtual Machine Manager 2008.

To demonstrate that point I’m going to swap over here to the VMware virtual infrastructure client, and this is their management tool for managing VMware virtual machines.

The thing I want you to pay attention to is that I have a datacenter here in New York, and I have several segments in this datacenter — development, production, and staging — and in production I happen to have a three node ESX cluster with several virtual machines running on that cluster. I can see information about memory, CPU utilization, all of that status information directly in this console.

I’m going to swap back to the Virtual Machine Manager console, and if you take a close look again you’ll see that I have that New York datacenter represented here as well. I have development, production, and staging, but I see more of my environment. We’ve synchronized with the VMware infrastructure, so you do see this three node ESX cluster that I showed you in the other console, and there it is. I also see a brand new three node Hyper-V cluster that I’m managing right alongside it.

So, I get this seamless and blended management experience. That's really one of our goals: bringing these environments together. (Applause.)

BOB MUGLIA: Yeah, it's sort of interesting. I mean, this is based on your feedback. You asked us for this, but there's a certain irony in Microsoft being the first provider to manage VMware ESX as well as Hyper-V in one console. So, we're taking that heterogeneous approach very seriously in virtualization.

RAKESH MALHOTRA: Absolutely. One of the new features in Hyper-V is the ability to rapidly move virtual machines between physical hardware with very little downtime. We call the feature Quick Migration. I’m going to show you how you can drive that through Virtual Machine Manager.

I'll right-click on a Hyper-V VM and click "migrate," and what happens here is Virtual Machine Manager gives me a list of recommendations on where to move the VM, using a feature that we call Intelligent Placement. It automatically looks at the requirements of the virtual machine, compares that with the capacity I have in my datacenter, puts those two together, and crunches all the numbers for me (CPU, memory, disk, all that stuff). Then it gives me a star rating system that's really easy to consume and simple to work with.

So, all I need to do is pick the top-rated physical host and click next. Before I actually kick off this quick migration, I want to point out that at the end of every one of our wizards you have the ability to view a generated PowerShell script that performs the equivalent operation of what the wizard is going to do.

BOB MUGLIA: In fact, that is what the wizard runs: PowerShell.

RAKESH MALHOTRA: Absolutely. This entire user interface is built right on top of PowerShell, so our API is your API. (Applause.)
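As a rough sketch of what such a generated script can look like (the server names are placeholders, and the cmdlet and parameter spellings approximate the VMM snap-in, so treat the exact names as assumptions), the whole migration wizard boils down to a few lines:

```powershell
# Connect to the VMM server and perform the same quick migration as the wizard.
# Server names are placeholders; cmdlet names approximate the VMM snap-in.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

Get-VMMServer -ComputerName "vmm01.dinnernow.example" | Out-Null

$vm     = Get-VM -Name "OrderApp01"
$target = Get-VMHost -ComputerName "hyperv03.dinnernow.example"

# Move-VM drives Quick Migration on a Hyper-V host
# (or a live VMotion transfer when the target is an ESX host).
Move-VM -VM $vm -VMHost $target
```

Because the console sits on the same layer, anything the UI can do can be scripted, scheduled, or wired into other automation the same way.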

Go ahead and kick off that quick migration, and what you see here in the user interface is that the VM we selected to migrate goes into the under-migration state. In a few seconds you'll see it pop over from the Hyper-V 01 host down to Hyper-V 03, which is the host that Intelligent Placement selected for me.

So there, in about 10 seconds, I've moved a virtual machine between physical Hyper-V servers through the VMM console.

I'm going to show you the VMware experience. VMware has a feature called VMotion that allows you to move virtual machines between physical hardware while they're running, without any downtime noticeable to end users. You can drive that with Virtual Machine Manager as well.

So, I’ll right-click on one of the VMware virtual machines, again I’ll click migrate, and just like the Hyper-V case, Intelligent Placement pops up here, and it gives me recommendations on where to move the VM, and I get recommendations again across ESX and Hyper-V hosts.

You’ll notice here that the transfer type is listed as live. That means that we’re going to use VMotion to actually drive the transfer.

I'll click next, and again the PowerShell option is here as well to view the script. I'll kick off that VMotion, and what you're going to see, just like in the Hyper-V case, is this virtual machine pop down from ESX 01 to the host I selected with Intelligent Placement. With VMotion I can actually do this during business hours, right, because the users aren't going to notice any downtime while it's happening.

BOB MUGLIA: Now, we will add live migration to Hyper-V. We've got it running in the lab, but we weren't able to get it into the first release. We will add that capability to the underlying Hyper-V system. In fact, Hyper-V is very competitive, and it will get even more competitive as we add more features over time.

The point that I think is important here is that when we built System Center Virtual Machine Manager to manage ESX, we made it a full first class management tool for the VMware ESX environment.

You need to use VMware's tools to provision their environment, but once the system is up and running, you can have one integrated management console that manages Hyper-V and Microsoft virtual machines as well as ESX. You get all the features that VMware provides, plus additional features such as PowerShell scripting and the Intelligent Placement option that Rakesh just showed.

So, it’s one integrated experience, and it’s also integrated with the rest of System Center, something very differentiated as well. (Applause.)

RAKESH MALHOTRA: Right. You saw here that I've now driven VMotion, for example, through this console. So, again, being seamless and blended is very important. Migrating existing virtual machines is great, but I also want the ability to rapidly provision new virtual machines. One of the ways we've made that easier is by implementing a feature that we call the library. The library is where I store all the building blocks for new virtual machines: ISOs, PowerShell scripts, templates, virtual hard disks, even offline virtual machines. Again, since it's integrated, I can put my VMware stuff in here, my Hyper-V stuff, and even Virtual Server. So creating that blended and seamless experience was important.

I'll actually go ahead and click on one of these templates. Templates are a particularly interesting type of object because I can create new VMs from them. If I go to the hardware configuration for this template, you'll see that because I have Hyper-V, I can now configure things that I couldn't in Virtual Server, such as BIOS settings and boot order, and I can create multi-processor VMs. One of the features we added in VMM 2008 is the ability to quickly and easily make virtual machines highly available. If I click this check box that says "make this VM highly available," every virtual machine created from this template will automatically be placed on a clustered host, either ESX or Hyper-V, and any clustering resources that need to be configured are automatically handled for you. So no more 12-step processes or complicated white papers to read about high availability; it's just a single click.
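Scripted, that single click corresponds to creating a VM from a template that carries the high-availability flag. A sketch, with hypothetical names throughout (the real cmdlet and parameter spellings in VMM 2008 may differ, and the -HighlyAvailable switch in particular is an assumption):

```powershell
# Create a VM from a library template and request automatic cluster placement.
# All names and the -HighlyAvailable switch are illustrative assumptions.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName "vmm01.dinnernow.example" | Out-Null

$template = Get-Template | Where-Object { $_.Name -eq "DinnerNow-Web" }

New-VM -Template $template `
       -Name "OrderWeb04" `
       -HighlyAvailable $true   # VMM places it on a clustered Hyper-V or ESX host
```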

BOB MUGLIA: High availability is obviously (applause) yes, clap on that. High availability is obviously something that's incredibly important in a virtualized environment. With Windows Server 2008 Enterprise we have made building high-availability clusters very, very simple, and we've extended that with Virtual Machine Manager to make it incredibly simple to deploy in a high-availability environment, both on Windows Server 2008 and on ESX.

RAKESH MALHOTRA: And high availability is important, creating new VMs is important, migrating is important, but you know the real power of dynamic management is having the system automatically do the stuff for me based on its knowledge of the environment, based on the changing demands of my data center. So we’ve implemented that functionality by deeply integrating Virtual Machine Manager with Operations Manager.

Let's swap over to the Operations Manager console. This is a diagram view of the Dinner Now environment that we've been working with; it happens to be the order processing system for Dinner Now. If I expand some of the nodes here, you'll see that it's a typical three-tier application: I have a couple of Web servers on the front end, my app server in the middle, and a SQL Server on the back end. These bluish-green icons indicate that those servers are virtual machines; my SQL Server is actually a physical machine. So, again, it's a blended type of environment. Ops Manager has deep information and knowledge about the VM and about the applications running within the VM, and that's actually very critical in a management solution. It's absolutely essential that it understand the applications before it makes configuration changes or resource changes; otherwise you end up with situations where you might have unintended consequences or unexpected downtime. So the knowledge is really critical.

What we did is blend the knowledge that you see in Ops Manager with the agility that you can drive with VMM. We implemented a feature that we call Performance and Resource Optimization, or PRO for short. If we go into the VMM console, I get access to what we call PRO tips. PRO tips are simply advice that the system gives me based on monitoring what's actually happening in my datacenter. As users access systems and demands change, it constantly looks for ways to be more efficient and for opportunities to properly allocate those resources, and again it does it in an application-specific way. What you see here is a PRO tip for that order tracking system. If you noticed, there were some warnings in that diagram view; the reason is that the system is being slammed pretty hard, and it's telling me that traffic is exceeding what I provisioned for it. So it's prompting me to add another IIS server to that Web farm. I'm going to go ahead and implement that PRO tip. Behind the scenes, VMM will automatically spin up a new IIS server from one of the templates in my library, configure it, and add it to my Web farm so that I can get back into a state of proper health.

BOB MUGLIA: So go into some detail about what these PRO tips really are. How are they built?

RAKESH MALHOTRA: It’s actually built as a direct extension of the management packs that already exist today in Operations Manager. Those knowledge packs that lots of vendors are already producing can easily be extended to implement PRO, and when I hit that Implement button, PowerShell actually executes on the back end to perform all of the operations. It’s incredibly flexible, and allows you to take advantage of all the rich knowledge in Ops Manager and the agility and customization of PowerShell.
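The PRO-tip pattern Rakesh describes, where a monitoring threshold breach becomes a piece of advice and implementing the tip triggers an automated remediation, can be sketched roughly as follows. This is a hypothetical illustration only; the names below are not the actual Operations Manager or VMM APIs, which implement PRO as management pack extensions that execute PowerShell.

```python
# Hypothetical sketch of the PRO-tip pattern: a monitoring rule turns an
# observed threshold breach into actionable advice, and "implementing" the
# tip maps to an automated remediation. All names here are illustrative.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ProTip:
    target: str   # the monitored application or host
    reason: str   # why the tip was raised
    action: str   # the remediation the system recommends


def evaluate_web_farm(requests_per_sec: float,
                      provisioned_capacity: float,
                      farm_name: str) -> Optional[ProTip]:
    """Raise a tip when observed load exceeds provisioned capacity."""
    if requests_per_sec > provisioned_capacity:
        return ProTip(
            target=farm_name,
            reason=(f"traffic {requests_per_sec:.0f} req/s exceeds "
                    f"provisioned {provisioned_capacity:.0f} req/s"),
            action="provision additional IIS server from library template",
        )
    return None


tip = evaluate_web_farm(1200, 800, "DinnerNow order tracking")
print(tip.action if tip else "healthy")
```

The point of the pattern is that the advice carries application-specific context (which farm, why, what to do), so an operator, or a fully automated mode, can act on it safely.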

BOB MUGLIA: So you can think of this as a first step in the integration of models, and the models that exist in the operations manager environment, and connecting that to the virtual infrastructure to allow automatic deployment of resources based on what’s actually happening within the application. It takes knowledge of the application to invoke those sorts of actions, and PRO tips are a way for you to set that up. Now we’ll take that further in the future, but this is an important first step, and we’re working with the OEMs once again, and our partners, to help implement these PRO tips as a very straightforward, PowerShell-based extension to management packs.

RAKESH MALHOTRA: Absolutely. And to demonstrate that point, we’re working with partners like HP to get hardware-specific information, so you get the best possible knowledge before you make calibration decisions in your environment. In this case, it’s a PRO tip prompting me to migrate virtual machines out of an HP blade enclosure to get me back into compliance with a power threshold policy that I’ve set. So, again, deep, rich application and hardware knowledge is all surfaced in PRO.

With that, let’s actually go back and take a look at our Dinner Now system, now that we’ve remediated it with PRO, and you’ll see here that now I’m in a healthy state. I actually have three Web servers on the front end, a single app server in the middle, and a database on the back end. So I’ve dynamically provisioned the system; I’ve automatically implemented and added the new capacity. And you can actually run PRO in a fully automated mode as well, so it takes its own advice and auto-implements tips; you can run this in a totally hands-off environment. And it works on both VMware and Hyper-V.

BOB MUGLIA: And so this is an example of how important it is to think about management holistically. At Microsoft, we think about System Center as a complete set of tools that all work together to solve your problems, and virtualization is a piece of that. It’s a critical piece, but you need operations management, as well as configuration management, to really solve it fully.

RAKESH MALHOTRA: Right. And the last thing I’ll say is, the beta is available as of today, so you can take this software home with you. Stop by our booth and give us your feedback.

BOB MUGLIA: Great. Thanks a lot, Rakesh. (Applause.) Lots of great stuff. That’s a lot of improvement in one year; we’ve come a long way since we were here a year ago. So virtualization is a key step. What I would like to do now is show you how one company is taking advantage of virtualization to really improve the efficiency of their environment. Let’s roll the video on the Scooter Store.

(Video segment.)

So there’s lots of great stuff to look at with virtualization, and a lot of companies are working on this right now; it is definitely the time to get engaged. And, again, we’re trying to provide you a complete solution that works across your overall environment. So if you have VMware, we’ll provide a better management experience. And if you don’t have VMware in your environment, then Hyper-V is an incredible place to start and drive forward. And, of course, the Microsoft environment provides a much more comprehensive, integrated solution at a fraction of the price of what VMware charges. So it’s a pretty good deal for you.

Let’s go and talk a little bit now about the next steps in the data center, and the evolution of the data center. The first thing I want to talk about is the evolution of the applications, and the way applications need to move forward. In today’s world, you deploy virtual hard disk images that contain both an operating system and the middleware and the set of applications connected to it, but that’s a pretty inefficient way to do things. You might have several thousand business applications in your environment, and it’s really a problem to have to manage several thousand operating system images. What you really want is to be able to separate those operating system images out, have a small number of them, say, 5, 10, 20 operating system images that bring together the combination of the underlying operating system and the middleware that you run, and then have your applications fully separated.

Now, with technologies like SoftGrid, that will be possible in the future. Overall, what we are doing is working to move our server applications into a stateless environment so that they can be composited together at runtime, and the applications don’t need to go through an installation process on top of the operating system. They can simply be copied; it’s really just a file copy onto the system. SoftGrid technology can help, and in the future that’s where we’re going to help existing applications, but some roles, like IIS, and SQL Server 2008, actually, are really just about there today in terms of being stateless. So application virtualization is a key technology, a key enabler to allow for the creation of an efficient, dynamic data center. But, as I’ve said before, you need more than just virtualization to solve this problem; you need an understanding of the applications, the models, the relationship between the components of the applications and the servers.

In today’s world, apps deploy on many servers. A typical minimum for most applications is three to five servers spread across a multi-tier application, and some apps take hundreds of servers to actually run fully. So a model brings those things together in a cohesive way, and allows the system to understand it. It does more than that, though: it actually allows people, through the process of developing applications, through the lifecycle of building an application, to understand the components, and to enable the configuration of it in a consistent way. It really starts with the business analysts that define the requirements for the application. They’re the ones that understand what the business need is, and those requirements need to be written down. Then an architect needs to take those requirements and define the model associated with that application, the architectural model that will be used throughout its lifecycle. Development implements to that model, it gets deployed inside the environment, and then it’s maintained and configured. And things like governance rules can be applied to ensure compliance. Having the consistency of a model is a very important thing, and we’ve been working on this for quite some time. We see some early manifestations of it appearing in products like Operations Manager today, Virtual Machine Manager, and Configuration Manager, but over time you’ll see this become deeper and deeper baked into the Microsoft products across the entire lifecycle of the business apps.

Models are very real for you today. If you’re using Operations Manager 2007, you are building models as a part of the process of defining the structure of the application, and knowledge that exists for Operations Manager really takes the form of a model. Now that’s going to evolve quite a bit as we go forward in the future, so really those are the steps, those are the things that bring things together. And if you take a look at this, this is the evolution of the data center from where we are today, where applications and operating systems are deployed physically in a somewhat ad hoc way, to a world where they are configured to be deployed, images are deployed at the virtualization level, virtualization images are laid down on hard disks, and really just the hypervisor needs to be deployed. And then virtualized images are deployed on top of that, applications are composited in, and it’s all brought together with models.

You can sort of see that here. Think about when you deploy an app: all of those things come together, the model defines the way the application should be run, and then it brings together the operating system image and the application, and then deploys it, or places it, on one or more appropriate servers. That’s where we’re going in the future. You see some of it today with these PRO tips; you see us having a realization of this. You see an integrated management environment across Configuration Manager, Ops Manager, and Virtual Machine Manager. You see models beginning to be used to help these things, and the knowledge inside Operations Manager being used to deploy applications within the virtual environment. It’s really the beginning. We’re five years into a 10-year vision. I said it was a 10-year vision five years ago, so we’re halfway into it, and we have five more good years of work to do to fully complete it. But it’s been a process and a journey, and very much a journey where we’ve learned a lot from you.

So one of the things we’ve learned in the process of this is that while we think about the Microsoft environment an awful lot, and we definitely work to make the Microsoft environment the best, most integrated environment for you to run your data centers and run your business with, and we understand there’s a lot of benefit in bringing all those things together, we know that many of you, maybe most of you, work in heterogeneous environments. And when you look at this picture, you actually see in some senses a heterogeneous environment. And what is that piece? Well, take a look, that’s really Linux coming out there; we’re actually looking at managing the Linux environment now.

System Center Cross-Platform Extensions

So today, what we’re going to talk about a little bit is the breadth of how we’re expanding System Center. We’re taking System Center forward to manage not just the Microsoft environment, but also UNIX, Solaris, HP-UX, and AIX, as well as Linux. So there is a penguin in that slide, and this is a very big statement for us, to expand out and begin to do heterogeneous management. And today we are announcing the Cross Platform Extensions for System Center Operations Manager. These are very real, they’re very real today, and with that, what I’d like to do is invite Barry Shilmover up to show us the Cross Platform Extensions for System Center. (Applause.)

Barry, good morning.

BARRY SHILMOVER: So when we came out and decided to architect and design Cross Platform Extensions, there were several things we wanted to accomplish. First and foremost, we wanted the experience that you, the IT professional, have to stay the same. We wanted you to be able to use Operations Manager in the same way that you do today to manage your Windows environment and your Microsoft applications.

What we’ve done is we’ve extended the Discovery Wizard to allow you to discover those non-Windows environments. So I go through the Discovery Wizard, I can add a discovery scope, and I can choose the UNIX or Linux environments based on IP address, on DNS name, or on an IP address range. Now, what I’m also able to do is put in the credentials that will be used to discover and to deploy the agent. I can supply a root password, or a root-equivalent password, or I can just use a regular account and then elevate that to root.

Now, to do this there are several things that we’re doing. One, we’re using industry-standard protocols. So once I click on the Discover button with the criteria I put in, the wizard will use WS-Management and SSH to connect to those UNIX and Linux environments, to discover them, to figure out what distribution it is, what architecture, what version, and return the list of systems that we’ve discovered.

Once you as an administrator choose the systems you want to deploy to, we will then deploy using those same protocols, SSH, the actual agent technology that goes onto those UNIX and Linux environments.
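The discovery flow Barry just walked through, expand the scope into candidate addresses, probe each one, and classify what responds, can be sketched hypothetically like this. The probe results are stubbed out for illustration; a real implementation would gather them over WS-Management or SSH from the target systems.

```python
# Illustrative sketch of discovery over an IP address range (not the real
# Discovery Wizard implementation). Probe responses are hard-coded stubs;
# in reality they would come back over WS-Management/SSH.

import ipaddress


def expand_scope(cidr: str) -> list:
    """Turn an address range into a list of individual candidate hosts."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]


# Stubbed probe responses keyed by address (assumed data for illustration).
PROBE_RESULTS = {
    "10.0.0.1": {"distro": "SUSE Linux Enterprise 10", "arch": "x86_64"},
    "10.0.0.2": {"distro": "Solaris 10", "arch": "sparc"},
}


def discover(cidr: str) -> dict:
    """Return distro/architecture details for hosts that answered a probe."""
    return {h: PROBE_RESULTS[h] for h in expand_scope(cidr)
            if h in PROBE_RESULTS}


found = discover("10.0.0.0/29")
print(sorted(found))
```

The same classification step (distribution, architecture, version) is what lets the wizard pick the right agent package to deploy in the next step.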

And we wanted to make sure that both the UNIX and Linux community and the UNIX and Linux administrators in your environment were comfortable with the components that we’re going to put on that system. So what we’re actually doing is, the components are made up of open source projects, and we will give you guys the packages to deploy that will install on those systems, at which point we’ll discover them, validate them, and bring them into Operations Manager.

Once we do that, Operations Manager takes over. As I drill into one of these systems and expand availability, you’ll see that I’ve got things like hardware and operating system availability. So I drill into hardware: there’s my network adapter, my processors, logical disks, as well as physical disks. Moving down into the operating system, I can monitor things like the cron service, or daemon, SSH, and syslog. And all this ties back into the existing Operations Manager infrastructure.

We’ve included things like dashboards for logical disks, network, and physical disks; and performance, so you can look at the CPU utilization of your UNIX and Linux environments side by side with your Windows environments. And because this is just pure Operations Manager, we tie into all the other components, like reporting. So I can actually run reports that ship with the management packs, and those reports will allow you to look at the availability of your UNIX and Linux environments in the same way that you would a Windows environment.

Now, what I wanted to do is quickly switch over, Bob, and explain exactly how we do this, to give you guys an idea of what the architecture looks like.

BOB MUGLIA: So let’s take a look at the architecture, then.

BARRY SHILMOVER: So as you look over on the left, the box on the top is your Operations Manager system, and the one on the bottom is your UNIX or Linux environment. There are really three components that get placed down on that system. First are those Ops Manager providers: we’ve written providers that interface with the operating system and pull information such as logical disk, syslog, physical disk, and processor data, covering health information, as well as performance information and configuration information.

We then surface that through OpenPegasus, which is an open source provider, sorry, an open source CIMOM, that exists in the world today and that a lot of the distributions ship with. The third component we use is Openwsman, which is an open source project from Intel. From there we simply communicate with WS-Management off the box. We consume it into Operations Manager, and we can give you all the different knowledge that I’ve just shown you.
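The path Barry describes, a provider exposing data as CIM instances through the CIMOM (OpenPegasus), which the management server then reaches over WS-Management, can be illustrated by how a WS-Management client addresses a CIM class by resource URI. This is a rough sketch following the common DMTF-style convention; the class name and namespace below are assumptions for illustration, not values confirmed in the talk.

```python
# Rough sketch of how a WS-Management client addresses a CIM class exposed
# by a CIMOM such as OpenPegasus. The resource URI follows the common
# DMTF-style convention; class and namespace names are illustrative.

def cim_resource_uri(namespace: str, cim_class: str) -> str:
    """Build a WS-Management resource URI for enumerating a CIM class."""
    base = "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2"
    return f"{base}/{cim_class}?__cimnamespace={namespace}"


uri = cim_resource_uri("root/scx", "SCX_LogicalDisk")
print(uri)
```

A WS-Management Enumerate request against such a URI is what pulls the health, performance, and configuration instances off the box and into the management server.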

BOB MUGLIA: So what we’ve done is take the System Center Operations Manager infrastructure and extend it to Linux, as well as HP-UX, Solaris, and AIX, the broad set of heterogeneous UNIX environments, and use open source components in that environment to enable our partners to build high-quality knowledge of those environments. This is very much a situation where Microsoft is providing the underlying infrastructure to allow for broad management of heterogeneity, but we’ll very much continue to work with our partners so that they will be able to supply knowledge that’s appropriate for each specific heterogeneous environment.


What I wanted to do is switch back to the demo, and show you some of these capabilities. To do that I’m going to go back into the Dinner Now application. It’s my job to watch and to monitor the different systems and different applications, distributed applications, end-to-end services, that exist in my environment. And there are three that I’m going to look at.

This first one, Dinner Now, is the one that Rakesh showed you. What Rakesh was not able to show you is that the data center, what makes up Dinner Now, actually spans beyond SQL Server, Windows, and IIS. As I start expanding this, you’ll see that I have other components; I have an Oracle database. This is being managed by the Quest Management Pack. We’ve been working closely with our partners to allow us to build these, to show these to you, and to make these available to you.

There’s the SQL Server, and MySQL, through another partner solution from a company called Xandros. As I expand that you’ll see I have the specific databases and the tablespaces that exist within that environment. So all of the models for those components get brought in through the management packs.

BOB MUGLIA: Again, really extending the model-based environment of Operations Manager now into a heterogeneous world, allowing our partners to really build up that knowledge.

BARRY SHILMOVER: Precisely. As I expand the Web servers, not only do I have IIS, I actually have an Apache Web server in here as well. And then there are the server components; with Cross Platform we’re going to ship the core OS management packs. And as I expand this, you’ll see it’s changed a little bit from Rakesh’s view. Not only do I have the Windows environment, I have a Solaris system, I have a Novell SUSE box, and I have a Red Hat environment as well.

BOB MUGLIA: I guess I can’t say that I recommend you running this many non-Windows systems in your environment. But, if you do, we want to be able to provide you with a great cohesive management environment for that. (Applause.)

BARRY SHILMOVER: So that’s kind of the overall diagram; let me show you one of the applications. This is the external Web app where people actually connect and place their orders. That’s the main site here. What I’ve done here is I’ve been able to go deeper than I could before: as I expand databases, I have the specific databases that make up this environment, rather than the database engines. I have Perspective, which tells me whether the Web site is up and running and whether the customer experience is what I expect it to be; the specific servers; services, such as the Oracle listener in this case; and then the Web site itself.

Rakesh has added some capacity, so I want to switch over to the environment, and make sure it’s still functioning, so my users can still go in, and still place their orders. So I’m just going to launch the Dinner Now Web site. We’ll place an order here. So I’ll just put in the zip code for Vegas, and it’s still early in the morning, Bob, so I’m just going to order some coffee here. Fourth Coffee I know is a local coffeehouse that can deliver to me, and I’ll order a couple of cappuccinos and go ahead and check out. At this point I’m going to submit some card information. Bob, I hope you don’t mind, but we’re going to use your credit card for this.

BOB MUGLIA: It’s a Microsoft card, it will be fine.

BARRY SHILMOVER: We’ll choose the address and the credit card that’s on file, and then we’ll place that order. So it looks like, from the customer perspective, they can come in and place an order. But there’s another customer that we deal with, and that’s the restaurant. So let’s take a look at their experience, and make sure that it’s what we’re expecting it to be. I’ll launch the kiosk from Ops Manager. I can see the order that we just placed, and drill into it, and I can see that it’s en route. So they’re actually delivering this to us.

So that’s kind of the customer experience, as well as our business partner experience. Now, we have some internal applications that we monitor within Dinner Now, and that’s the accounting program. As I switch into it I can already see that there’s a problem, before I drill into it let me show you a couple of things. Under services I’m actually monitoring the PHP components, again, through the Xandros Management Pack, things like the databases and so on, and so forth.

If I actually go into the accounting, I’ll log in, put in my password. I have to make sure we’re secure, Bob.

BOB MUGLIA: It’s a long one.

BARRY SHILMOVER: And I can see that I’ve got a problem: the database is currently offline for some reason. So let me switch back to Operations Manager, and I’m going to use the problem path to go right down to what the problem is. Let me just zoom in so you can see it a little bit better. I can see that the problem is in a logical disk called Op Dinner Now, and I know that this is where the database actually resides. Let me see what Cross Platform Extensions can do for me.

I’ll go and look at the Health Explorer; it will tell me that that logical disk is currently offline. From here I can look at the disk health, so I have embedded views and whatnot. It gives me the alert information, as well as the knowledge. But let me actually look at the context. One of the things the Cross Platform Extensions do is return the context of the failure, some information about what actually happened, so you know where to start when you try to solve the problem. I can see that Op Dinner Now is currently not online.

So normally what I would do here is step out of the tool, go to another tool, or contact the UNIX administrator, SSH into the system, and try to remember which commands to execute to actually figure out what the problem is. What we’ve done is we’ve built that into the knowledge and into the management packs that come with Cross Platform. So we have diagnostics.

As I scroll down, there’s a diagnostic that returns the output of a command called df -k. What this does is go and execute the command for me securely, and return that information. Looking through the list, this output tells me that Op Dinner Now is not currently mounted. So at this point I know that I have to go and remount that drive. Again, I’d have to leave the tool, go to SSH, figure out what system it is, log in, and perform that action. But, again, we’ve simplified things for you; we’ve made it a lot easier.
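The df -k diagnostic boils down to parsing the command’s output and checking whether the expected filesystem appears among the active mounts. The sample output and the mount point name below are made up for illustration; the real diagnostic runs the command remotely over a secure channel and returns its raw output.

```python
# Sketch of the df -k mount check. The sample output is fabricated and the
# /opt/dinnernow mount point is a hypothetical name for the database volume.

SAMPLE_DF_K = """\
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1             10321208   4046532   5750328  42% /
tmpfs                  1029644         0   1029644   0% /dev/shm
"""


def mounted_paths(df_output: str) -> set:
    """Parse `df -k` output into the set of active mount points."""
    lines = df_output.strip().splitlines()[1:]   # skip the header row
    return {line.split()[-1] for line in lines}


mounts = mounted_paths(SAMPLE_DF_K)
print("/opt/dinnernow" in mounts)   # the database volume is missing
```

When the expected mount point is absent from the parsed set, the remediation is exactly what Barry describes next: remount the drive.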

We’ve built those recoveries right into the management packs. Now, in this case it runs statically, meaning that I have to kick it off, but with the dynamic data center I can actually configure these so that they execute automatically if there’s a problem. All I have to do is click on the disk health mount recovery and say yes. What the Cross Platform Extensions are now doing is using WS-Management to connect securely to that system, in the right context, using the correct credentials that are stored securely within Operations Manager, and executing the correct command to bring that system back up.
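The static-versus-automatic distinction Barry mentions comes down to a simple gate on when the recovery command is allowed to run. This is an illustrative sketch, not the actual Operations Manager recovery model.

```python
# Illustrative gate for a recovery task: in static mode an operator must
# kick it off; in automatic mode it fires as soon as the monitor is
# unhealthy. Names are hypothetical.

def run_recovery(monitor_healthy: bool, auto: bool,
                 operator_approved: bool) -> bool:
    """Return True when the recovery command should execute."""
    if monitor_healthy:
        return False                      # nothing to fix
    return auto or operator_approved      # auto mode, or a manual kick-off


print(run_recovery(monitor_healthy=False, auto=True, operator_approved=False))
```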

In a second here, you’ll see the screen refresh, which tells me that this actually occurred. As I scroll down you can see that the recovery output was true: it was successful. In a few seconds my view will go back to a healthy green state. In the meantime I’m just going to refresh the view here. There’s the order that we just placed, Bob, and the cappuccino is on its way.

BOB MUGLIA: And one of the things we’ve done, by working with open standards and by building on existing tools and infrastructure in the Linux and UNIX environment, is really make it simple to create these management packs in a heterogeneous way. And it’s a very mature environment for you. These tools already exist in the heterogeneous environments, so building on that makes it very straightforward for our partners to take and extend what we do in Operations Manager with Microsoft and Windows, and bring that to the heterogeneous world.

BARRY SHILMOVER: So as you can see in my view, everything has gone back to healthy. I was able to fix an issue using native components that are in the Cross Platform Extensions on a UNIX box, without ever having to go to that UNIX box, right from Operations Manager.

BOB MUGLIA: That’s great. Thank you very much. (Applause.)

Okay. So we’ve been talking about this; when is it going to come? Well, the beta of the Cross Platform Extensions is available today. And as you walk out the door you can get a copy of it; we’ll have people handing out beta copies of the Cross Platform Extensions. So if you work in a heterogeneous environment, and would like to extend what you’re doing with System Center to it, you can do that.

We also have a session later this week. If you look in your program there’s a session that’s cleverly titled Cross Platform Monitoring with System Center Operations Manager. We kind of didn’t want to give away before today that we’re doing these cross-platform heterogeneous extensions. So, in fact, that session is really going to be about the System Center Operations Manager Cross Platform Extensions. It’s Thursday at 10:15 a.m., in Room 2401-B. So if you’re interested, that session will be the place to learn a lot more.

Interoperability and Heterogeneity Standards

So it’s a pretty big step forward to extend into this heterogeneous world. And thinking broadly like that is very much one of the things that we’re trying to do, once again, in terms of listening to your feedback. And you told us that Windows is great, and Windows might be a large part of your data center, but for many of you, you have heterogeneity out there. So we’re taking a step towards allowing you to have one single solution with System Center, to manage that entire heterogeneous environment.

Now, overall this is really part of a broad strategy we have to ensure that the products that we build interoperate in the environment you have. You have data in Microsoft products and tools, you have data in Windows, Linux, and other systems, you need to have those interoperate very well. Management is a key fabric that helps to enable this.

Talking about these interoperability extensions: for several years we’ve been working with partners on connectors for Operations Manager to connect to heterogeneous management systems, such as Tivoli and OpenView. One of the things we’re doing today, as well, is making available a new set of connectors, based on that same architecture, that allow Operations Manager to very effectively connect to Tivoli and OpenView systems if you have those in your management environment. So while we will provide, over time, a very complete operational management environment for heterogeneous systems, we know, once again, that you probably have other systems in there, and we want to enable you to connect.

Standards are critical. Microsoft is helping to drive many, many standards efforts forward, and as a part of our management effort there are a number of them that we’re working on. Two of them that are very interesting are Openwsman and OpenPegasus, and we’re very pleased to announce today that, in addition to just working with OpenPegasus, we’re joining the OpenPegasus steering committee, so we’re helping to drive that important project forward, and working very cooperatively with the open source community in building these tools and technologies based on OpenPegasus. (Applause.) It’s not your grandfather’s Microsoft, really; we’re changing our approach. And, once again, I think it’s learning from you, and what you want, and recognizing that interoperability is a key thing.

Heterogeneous virtual machine management: we’re the first in the industry to have this, and we think it’s very important. We know that if you’ve deployed virtualization in production, you probably have a VMware environment, and we want to be able to coexist very effectively with that, and provide you with a much better, much more integrated management experience for both your ESX environment as well as your Hyper-V environment, and of course allow you to drive forward your virtualization with that integrated management solution, and at a small fraction of the cost that you’ve been paying right now. So another key thing.

A big part of this is partners. Microsoft can provide the platform. We can provide the underlying infrastructure to enable these things to happen. But it only happens by working across the industry. It really is a very broad partner community that makes all of this happen, and so working with partners on heterogeneity, and working with partners on the Windows environment, is very key to what we’re doing. And now this work that we’ve announced today with heterogeneous Cross Platform really provides a much broader opportunity for our partners. People ask the question: well, what do your heterogeneous partners like Quest think about this? We’ve been working with them for a long time on this, and by providing the underlying information, we allow our partners to focus on building incredible value in the knowledge that exists in these heterogeneous environments. So it opens up a broad new set of opportunities for these partners.

As I say, this is very real. We’ve been working on it for a long time. In addition to getting the beta, you can go out onto the show floor and take a look at quite a few partners who will be demonstrating these tools working in a heterogeneous way. They already have those things in place, and they’ll talk about the availability of their various management packs.

So overall, it’s a great day, it’s a great week, it’s a time for change. It’s a time for you to go out and learn about the broad set of management services that we make available in connection with the overall ecosystem, and our partners. All of this really happens because we want to listen to you. We know that you are the heroes within your companies that make and drive your business forward, and we know management is an incredibly important thing. We are committed to being the core partner you have in providing the management infrastructure in your company. There is a lot to see, a lot to do, a lot more great stuff to come. It will be a great week. Thank you very much, have a good MMS.
