Bob Muglia: WinHEC 2006

Remarks by Bob Muglia, Senior Vice President, Server and Tools Business, Microsoft Corp.
WinHEC 2006
Seattle, Washington
May 23, 2006

BOB MUGLIA: Good morning. You’ve heard from Bill and from Will all the great things that are happening in so many different parts of the company and the opportunity that exists for you. In the server space we see great things, as well. There is an unstoppable trend toward industry-standardized computers. And with that unstoppable trend, we see Windows, Windows Server really driving to success in the future. As we do that, it’s really an industry effort to bring forward that success, and help business customers solve their problems.

So why is Windows winning? Why are we seeing all these computers moving to Windows? Underneath, the real key is understanding the needs of business customers, what they need in their server space, and how we can build products that, together as an ecosystem, meet those needs in a way that’s not matched by the competition. The center of that is really understanding the people within a business, understanding how to provide unique benefits to those people to create business advantage for our customers.

We’re investing very heavily in this, we’re thinking about all the different workloads, all the different ways that our business customers use our products, together creating all these opportunities for the ecosystem. We just launched a major campaign we call People Ready, which is a critical focus on helping business customers. And we think about the IT space, and how IT makes decisions about the problems they face.

There are a set of things that we are focused on together with the industry. We see solutions to problems that have been faced by the IT community, as they build out business solutions that have been there for a long time. And there are tremendous opportunities for the industry as a whole to go off and solve. How can we work together to provide solutions that make people more effective? How can we help teams work more effectively together, whether that be in Web conferences, or making meetings more effective, or in team collaboration?

How can we improve the management environment, to cut the cost for IT, so that, as Bill said, those IT dollars can be spent, and invested in solutions that drive new opportunities in the hardware business, as well as new software and business application opportunities? How can we provide security for the marketplace? I heard today on the radio, on NPR coming in, once again another information theft problem that companies face. How can we help companies to prevent these sorts of problems in the future? And at the center of all this, of course, how can we help companies build business solutions in a more effective way to allow them to create business advantage, that’s unique to them, and differentiates them from their competitors?

Well, the Windows Server product line, together with the entire ecosystem, the hardware innovation that all of you drive, the work that’s done in the solutions space to build solutions that are targeted to business, the ISVs that create custom and packaged solutions, all these work together to create a virtuous cycle that solves business problems.

As we look forward, as I said, there’s a trend towards industry standardization, it’s unstoppable, and Windows is leading in that space. This is the latest set of data from IDC showing the incredible growth of the server industry. That growth is all being driven by the industry standard space, and Windows Server is capturing the lion’s share of it.

We continue to see double digit growth in the server space in the next few years at least, and with that comes tremendous opportunity, hardware opportunity, peripheral opportunity, opportunity to sell solutions into business. Again, Windows is at the center of that.

What are the places that we see as critical opportunities, where we’re making big investments that thus provide some of these opportunities for you to capitalize on? Well, in an hour I can’t cover them all by any means, but what I wanted to do was focus in on three very important general areas where we’re putting some substantive focus, and then talk a little bit about the business opportunities that exist for you.

So those opportunities are really the server marketplace, all up, and the different kinds of new and emerging workloads that are opportunities in the server, that’s one. Storage, and the opportunity around storage, and attaching storage devices, both high end storage systems and lower cost storage systems, to servers across the entire business ecosystem, that’s two. Then management, and in particular the opportunities around virtualization and management to drive down some of these costs overall.

Let’s start with the server all up, the right server for the right job. Well, there’s a number of different workloads that I would love to focus on, and I’m only going to focus on a couple here, but there are some very important ones that have tremendous opportunity for the hardware marketplace. The first one that we just see phenomenal, phenomenal opportunity in is the small business market.

By and large, small businesses are underserved by technology today, that’s true in the United States, that’s true in Europe, and it’s true all around the world, in Asia, and in the developing countries especially. Now, what we’re seeing is, if you go back three to five years, many small businesses didn’t even have a PC, and those that did would typically only have one. Now we’re seeing more and more small businesses have multiple PCs all connected together to the Internet. But, most of those companies don’t have servers yet, and when they bring a server into their environment they get a lot of benefit in terms of IT administration.

Now, what we have done is we’ve created this product called Small Business Server. It is a unique product in the market, there is nothing like this product in the marketplace, and it’s driving a whole ecosystem. It’s driving a hardware ecosystem that provides an opportunity to sell new server devices into businesses that would never buy them in the past. It’s driving a VAD marketplace, and an ISV marketplace, that’s focused on building, and installing those solutions for customers. It’s designed to make that as easy as possible, and we’ve seen tremendous growth, and tremendous success with Small Business Server since we launched the 2003 version several years ago.

Now, this summer we’re going to launch a new, updated version of that, Small Business Server 2003 R2, and it brings together some new, important benefits, the most important of which is integration with Windows Update to allow the server itself, and all the PCs attached to the server, to be updated with new patches and have new service packs installed. That’s true for Windows, as well as for other Microsoft applications. So it’s a huge step forward in terms of the maintainability of these systems. This is a big market opportunity. There are on the order of 20 million small businesses that are underserved in this space, and it’s a great chance to go out and drive new hardware business into that important marketplace.

The second area that I’d like to focus on is high-performance computing; this is a huge growth area in the overall server market. IDC has estimated that next year on the order of 700,000 servers will be sold for high-performance computing. This is a very big segment of the market, and it has been a segment that has been dominated by Linux over the years.

Now, we’ve seen a move in the past few years away from proprietary high-end, high performance servers, special purpose servers, to standardized servers being clustered together, but Linux has really fueled that. So over the last couple of years we’ve worked with the hardware industry, we’ve worked with the software industry, and we’ve worked with the companies and the academic institutions that really are interested in buying these high-performance computing solutions. And we’ve built the Windows cluster servers, the Compute Cluster Server.

The marketplace we’re predominantly targeting here is the commercial marketplace. And while this product will scale to hundreds of nodes, we think the sweet spot, and certainly the most profitable part of the marketplace for high-performance computing is in the 8- to 64-node range. All the feedback we have says that that is a tremendous growth opportunity, particularly in industries like oil and gas, and pharmaceuticals, and financial segments that really require this computing power.

There’s a tremendous opportunity to bring to market very profitable solutions in that space. The hard part, from a hardware industry perspective, is that the really big marquee HPC installations tend to not be that profitable to deploy, but the smaller ones, where there’s tremendous growth in the years ahead, have substantive revenue opportunity for all involved.

One thing that’s been interesting as we moved to introduce this product has been the reaction of customers and the software and hardware ecosystem. Customers are thrilled to have Windows in this environment, because they understand Windows. They know how to manage that, and these Linux-based solutions have been difficult for them to integrate into their IT environment. Likewise, we’ve seen similar reaction from the software industry that builds customized applications for high-performance computing. They’ve been faced with a very strong divergence of different hardware and software combinations that they’ve needed to test. Windows Compute Cluster Server provides a standardized platform for them to build applications on, and we’ve been able to rally virtually the entire software industry to build solutions based on Windows Compute Cluster Server.

We’re introducing this product later this summer. It’s going to be happening very, very soon. And so it will absolutely be available as we go into the fall selling season, and we think it’s a very big opportunity for the industry as a whole, certainly the hardware industry, but also the software industry, as well. This is a big market opportunity, a lot of growth ahead in high-performance computing, fueled by Windows.

Now, one thing that’s important for high-performance computing is very high-performance networking, but there are a lot of other important reasons to have high-performance networking as well. We’re seeing this in the storage space, where there’s a trend towards iSCSI, and the need for very scalable, very high performance DMA access into memory, with the mainline processor offloaded, is quite important.

Now, the industry has been moving over the last few years towards TOE-based (TCP offload engine) solutions. I’m pleased to say that we’re announcing today the availability of the scalable networking pack for Windows Server 2003, which provides exactly that set of capabilities on the standardized Windows Server operating system. So it’s now possible to build solutions that incorporate this scalable, offload-based network processing.

We’ve seen a tremendous amount of support by the industry as a whole around this scalable networking pack. Companies from the mainline networking vendors, together with all of the OEMs and hardware vendors, have rallied to the potential that this provides. And Windows has really led in this market; we’re actually getting a solution out ahead of Linux and other operating systems in providing fully functional, fully tested, OEM-built units that incorporate all of this great hardware technology in a proven software solution.

The results are pretty dramatic as to what you can get with this scalable networking. What I’d like to do now is invite Ian Hameroff up to show you a quick demonstration of some of the performance benefits that we get with the scalable networking pack.

Good morning.

IAN HAMEROFF: Thanks, Bob.

Good morning. We’ve created two demos to help illustrate the benefits that the scalable networking pack will provide Windows Server 2003 customers. For the first demo I will use a network benchmarking tool to demonstrate one of the scalable networking pack’s key benefits, reducing the CPU overhead related to network packet processing.

For this scenario we have four Intel-based Dell PowerEdge servers, with Broadcom NetXtreme II gigabit network adapters.

Although both these servers have the same hardware configuration, I’ve only enabled the scalable networking pack on one of them, this screen with the green background. Now, with performance monitor configured, to show CPU utilization, I’m going to go ahead and launch the network benchmarking tool on each server, and let’s see how this impacts it.

As you’ll immediately notice, the server with the scalable networking pack, by offloading connections to the network adapter, is able to handle the same traffic load at less than half the CPU utilization of the other server. So, Bob, let’s see how this looks when we put this on a Windows Server workload. For this second demo we now have two AMD-based HP servers, running the Windows Media Services workload. Once more, both these servers have the same hardware configuration, with the scalable networking pack only enabled on the screen with the green background. Now, before I came on stage I launched a traffic simulator to simulate 750 concurrent Windows Media Player sessions to each of these servers, streaming a 1.5 meg video file.

BOB MUGLIA: So it’s a very demanding video streaming server.

IAN HAMEROFF: In fact, it’s high definition quality video. We already have 750 clients on there. Now, with performance monitor configured to show both CPU utilization and the number of streaming client connections, I’m going to go ahead and double the number of connections to each of these servers. So what we’re seeing here is that the server without the scalable networking pack, the one with the red background, has peaked at 100 percent CPU utilization before all the network connections could be established. Now, to raise the stakes even further, I’m going to go ahead and launch another 250 client connections to each of these servers.

BOB MUGLIA: That one’s already been at 100 percent.

IAN HAMEROFF: Once more, the server with the scalable networking pack, the one with the green background is able to support more connections, at a lower impact to the CPU, thanks again to the ability to offload that network processing to the network adapter.

So let’s take a look at how this impacts the client experience. I’m going to go over here, and we’re going to launch a new network connection, excuse me, a new Windows Media connection, to each of these servers. Now, as you see, the server with the scalable networking pack is able to immediately provide a high bit rate, smooth video experience, while the server without the scalable networking pack will have trouble even connecting to the media server.

So to summarize, I’ve shown you two demos which help demonstrate the cost-effective scalability, performance and network throughput gains that the scalable networking pack can deliver. Depending on the server workload this could translate into a 20 percent, up to 100 percent reduction in the CPU overhead related to network packet processing, with up to a 40 percent increase in throughput. So to learn more about how the scalable networking pack can help you do more with less, and to download the bits, visit


IAN HAMEROFF: Thanks, Bob. (Applause.)

BOB MUGLIA: As I said earlier, the scalable networking pack is available now, and we expect, given the incredible innovation that the hardware industry is providing with these offload-based chips, that this will become a standardized feature on servers over the next 18 months or so, and over time it will become very important on clients, as well, this important innovation across the board that the hardware industry is delivering.

Now, I want to talk a little bit about Windows Server and some of the amazing benefits that are coming in Longhorn. I could spend over two hours talking about the new features in Longhorn, but I wanted to give you some of the highlights of the things that we’re delivering with Longhorn Server.

The next major release of Windows Server, as Bill announced earlier today, just went to Beta 2 at the same time as Windows Vista. We made some major investments in the kernel, updating the infrastructure to take advantage of all the new hardware capabilities that are important, particularly things like Serial ATA that have become so common on servers. We’ve put in functionality to provide hardening at the security level, including the firewall, making it easy to configure the firewall, which I’ll show in a minute or two.

The underlying server is componentized, so you can build an image, which has just the capabilities that an enterprise requires, thus reducing the surface area of that server considerably. Great features like dynamic partitioning, Bill in a sense showed that today with the virtualization demo that was done earlier, and we have great new capabilities, such as the scalable networking pack, as well as new generation TCP/IP stack that provides much better throughput for high bandwidth situations.

On the operations infrastructure side, there’s a ton of new technology here: a new version of Terminal Server that provides application remoting and a proxy gateway for Internet access to Terminal Services, and Network Access Protection as a built-in feature. Windows virtualization, we talked about that earlier in Bill’s keynote, about the capabilities of the next generation Windows virtualization and Windows hypervisor. On the application platform, huge improvements in IIS, allowing that to become componentized as well, allowing ISPs to configure their environment exactly the way they want it; it’s a very, very high performance implementation of IIS. New capabilities like Windows Workflow Foundation and Windows Communication Foundation enable new generation business applications and Web service applications to be easily produced.

Lots and lots of great new capabilities in Windows Server Longhorn, and this is just scratching the surface. Beta 2 is available now, something that you can take home with you and begin to look at. We feel really good about the stability of the products, and the kinds of problems that it’s going to solve for customers as it goes to market late next year.

What I’d like to do now is invite Dan Harman up to give you a demonstration of one of the really interesting new capabilities that simplify the management of Windows Server out of the box, called Server Manager.

DAN HARMAN: Thanks, Bob.

BOB MUGLIA: Good morning.

DAN HARMAN: Good morning.

I’m very excited to be here today to show Server Manager, a collection of new tools we’re shipping in Longhorn Server that will help IT administrators very seamlessly install, configure, and manage their servers. One of the first things you’ll see after installing Longhorn Server is a page we call Initial Configuration Tasks. This tool is designed to walk administrators through a step-by-step process for completing set up, and getting newly installed servers initially configured.

With Initial Configuration Tasks I can do things like set the administrator password, configure the network settings, join a domain, and enable Windows Update. Then once I have the server initially configured, I can customize the server to provide the specific set of functionality I need in my environment, using the Customize This Server section. Here we have a link to add roles, which launches the Add Roles wizard.

If you’re familiar with some of the tools we shipped in previous releases of Windows Server, you’ll know we had several different ways you can install and manage components on the system.

BOB MUGLIA: It was a very confusing thing, sometimes you had to go to the Control Panel to do things, sometimes you could do things through Manage and Configure Your Server. It was a mess.

DAN HARMAN: Exactly, you had the Configure Your Server wizard, or the Manage Your Server tool, or if you didn’t like those you could go into Control Panel, and launch Add/Remove Programs to install system components directly.

In Longhorn Server these are all being replaced by a single wizard experience for installing and configuring roles. Using the Add Roles wizard you can select the set of roles that you want on the server. So for example, I can select both DNS and DHCP, and in a single pass through the wizard install and configure those roles.
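For administrators who prefer scripting this, the same role installs are also exposed through a command-line utility, servermanagercmd.exe, that ships alongside Server Manager; a hedged sketch of installing the two roles Dan selects (run from an elevated prompt, exact switches may vary by build):

```powershell
# Query which roles and features are currently installed
servermanagercmd -query

# Install the DNS and DHCP Server roles, restarting if required
servermanagercmd -install DNS
servermanagercmd -install DHCP -restart
```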

Another way that we’re improving the deployment experience is by handling dependencies between roles in a more seamless, integrated fashion. For example, if I select Windows SharePoint Services I’m immediately informed that SharePoint requires other roles, such as Web server role. When I select add required role services, all these dependencies will be installed, along with Windows SharePoint Services.

As you can see in the navigation pane, panes are dynamically added to the wizard, that allow me to customize the role according to my specific environment. This helps administrators correctly configure their servers, in a way that is prescriptive, and according to Microsoft best practices. Not only that, but no longer do you have to run the Security Configuration wizard separately. The Add Roles wizard installs roles so they’re secure by default, out of the box.

BOB MUGLIA: That’s a pretty big deal. In Windows Server 2003 we had the Security Configuration wizard, but it was really complicated to set up and make sure that the roles were always secure, and the servers were secure by default. Now, with the configuration capabilities in Longhorn, it will just be by default that all roles are always configured with just the ports open that are required to satisfy the purposes that that role has.

DAN HARMAN: Exactly. Finally, I can confirm my selection, and install the roles. Now that you’ve seen how to install and configure roles in Longhorn Server I’m going to show you another tool we’re building to streamline ongoing management tasks. Server Manager is a new MMC console that is a one-stop-shop for local management of Longhorn Server computers. From the Server Manager home page I can get an overview of how the server is configured, what roles are installed, and what their status is. In this case, I can see that something is wrong with the print server role. I can click print server in the list of roles, and jump directly to the print server home page to drill into the problem.

Here I see that one of the services for the role is not running, and as a result the role is not functional. But, instead of launching some separate tool, or going to the command line, I can actually click on the service, and start it back up right from the home page, and the role is functional again.

BOB MUGLIA: So it’s a one-stop-shop for doing very common tasks. There’s still the management console, in fact, we’re making that a lot easier for administrators to use, but this provides a place for administrators to go, a centralized place, to see the status of what’s going on in their computer, and do basic configuration that they do on a frequent basis.

DAN HARMAN: Exactly. This is Server Manager, I hope you are just as excited as I am about this new technology. It’s available now as part of Longhorn Server beta 2, you’ll be able to try it out with your new beta 2 DVD from WinHEC.


DAN HARMAN: Thanks, Bob. (Applause.)

BOB MUGLIA: I said that Windows Server is winning in the marketplace, and it’s winning because we’re listening to customers and making it easy to configure things; Server Manager is a great example of that.

Another area that’s really important is running commands, and Windows Server has never been particularly strong in this area. This is an area where UNIX and Linux have really always led Windows Server: generalized command shells and generalized scripting languages. What we’ve done over the last couple of years is focus on providing a whole new wave of innovation in the command shell environment, and we’ve recently introduced Windows PowerShell, a command environment that brings together the best of the scripting environments that have existed in the UNIX space for so many years with object capability, and integration into the graphics environment.

With that, what I would like to do is introduce Jeffrey Snover up to do a demonstration of PowerShell.

Jeff, good morning.


JEFFREY SNOVER: Windows PowerShell is our next generation command line shell; it’s interactive, scriptable, and allows you to build admin GUIs layered on top of command line interfaces. I’ll start off by showing you an admin GUI layered on top of PowerShell. This is a prototype of a Windows WMI Explorer. I click here to see all the WMI namespaces. I can click into the CIMV2 namespace to see all the classes, come down here, pick the PNP entities to see all the instances, select an instance, and see the properties.

BOB MUGLIA: What this says is that PowerShell just natively out of the box will know how to access WMI-based information that you’ve all instrumented inside the hardware devices you’re creating.

JEFFREY SNOVER: Exactly. And that was layered on top of a command line. Now let me show you the command line equivalent. Here, I’m showing all the drives on the system. Notice we have some new drives. This is the management drive. The management drive is WMI. I can cd into the management drive, and when I do a dir I’m, again, seeing the WMI namespaces. I can cd into CIMV2, and now when I do a dir I see all those classes, right, but I don’t want all those classes. I just want the PNP classes. So, how do you do that in a file system? Wildcards. And you get it, you have wildcards. So, now I’m looking at just PNP devices. I can cd into those, and now I can see the instances of those ACPI devices.

Now, once we’ve found those PNP devices, we can leverage PowerShell commands to do amazing things with them. Here, we’ll get all the devices and show them as a table. PowerShell commands are different than traditional commands in that they emit objects, not text. And the benefit of emitting objects is that you can do all sorts of powerful things without having to resort to text-based parsing.

For instance, imagine I wanted to take these entities and group them by status. It’s as simple as this. Now we see them all grouped by status. Or, I can export them to CSV; CSV is great because then you can use all sorts of great tools like Access or Excel to see and manipulate that data.
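The object pipeline being demonstrated can be sketched with standard PowerShell cmdlets; the exact commands shown on stage weren’t captured in the transcript, so this is an illustrative reconstruction using the Win32_PnPEntity WMI class:

```powershell
# List Plug and Play devices as objects, showing name and status
Get-WmiObject -Class Win32_PnPEntity |
    Format-Table Name, Status -AutoSize

# Group the same device objects by their Status property -- no text parsing
Get-WmiObject -Class Win32_PnPEntity |
    Group-Object -Property Status

# Export the objects to CSV for analysis in Excel or Access
Get-WmiObject -Class Win32_PnPEntity |
    Export-Csv -Path .\pnp-devices.csv -NoTypeInformation
```

Because each stage of the pipeline passes .NET objects rather than text, Group-Object and Export-Csv work on named properties directly, which is the point being made about avoiding text-based parsing.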

BOB MUGLIA: That’s more of the power of the fact that PowerShell is built on the .NET infrastructure, the .NET Framework, and so it’s natively object-based. It’s why we are able, in our next generation of management consoles, to build all of the business logic, the administrative logic associated with configuration, into PowerShell commands, and just provide a GUI environment on top of that.

JEFFREY SNOVER: Exactly. PowerShell also accelerates the adoption of new technology. The DMTF has recently standardized a common diagnostic model, CDM. Now, let’s jump over to an HP machine which has been instrumented with CDM diagnostics. Here I’m going to show a set of custom PowerShell cmdlets that were written with HP to exploit those CDM diagnostics. First, we’ll get all the CDM devices; these are all the devices that have diagnostics on this machine. I’ll then drill into the video controller, get the details on it, and notice it has a set of diagnostics associated with it. I can then invoke one of those diagnostics. This will run in the background, and we can check on the results later. These diagnostics were written by ATI and delivered with their component. This allows the entire ecosystem, from manufacturing, to field support, to admin, to management products such as MOM, to invoke these diagnostics in a common way. PowerShell makes your investment in CDM a good deal by delivering a great end user experience.

BOB MUGLIA: So, while we have some capabilities to manage hardware through WMI out of the box, there’s a tremendous opportunity for you, as you build your devices, to provide PowerShell cmdlets that are instrumented and designed specifically to expose those capabilities to administrators in easy to use scripting form.

JEFFREY SNOVER: Exactly. Now, with PowerShell, it’s not an either/or world. We don’t have to choose between the CLI and the GUI. Look, it’s just a statement of fact that in the past we’ve been weak on command line support. PowerShell fixes that, and now you can do amazing things. This gets all the processes, you can sort them by handles, notice no text parsing, and then get a list of those showing the names and the handles. This is powerful stuff that you can do directly from the console.
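A sketch of the kind of process pipeline being described here (the on-screen commands weren’t captured in the transcript, so this is illustrative):

```powershell
# Get all running processes as objects, sort them by handle count,
# and project just the name and handle count -- again, no text parsing
Get-Process |
    Sort-Object -Property Handles -Descending |
    Select-Object -Property Name, Handles |
    Format-Table -AutoSize
```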

But the fact of the matter is, information like this is best seen visually. So, let’s do the same thing, but this time let’s output it to a chart. There you have it: Windows PowerShell, the best of both the command line and GUI worlds. PowerShell is our next generation command line shell that’s interactive and scriptable, and delivers you more instrumentation to use in an easy way. You should write custom PowerShell cmdlets to differentiate your products, and layer your admin GUIs on top of the command line. Thank you.

BOB MUGLIA: Thanks, Jeff. (Applause.)

So, that next generation command technology is shipping late this year. We are building several new products, such as Exchange 2007, with PowerShell deeply integrated into them, and as Exchange ships in the fourth quarter of this year, in roughly that time frame PowerShell will be available on Windows Server 2003 and, when Longhorn is available, on Longhorn as well. So, it’s something you can take advantage of in the very near term.

Let me now talk a little bit about storage. I think there’s a tremendous opportunity for the industry around storage. We’ve seen tremendous adoption and growth of storage over the last few years, and one of the things that’s interesting is watching how Windows Server has taken off and more and more storage is attached to Windows Server.

As you can see from the chart, and this is all IDC data, dramatically more information is stored and attached to Windows Server, either through SANs or directly attached, than any of the other operating systems on the planet. And as we look forward over the next few years, we know that there is going to be continued growth in network attached storage, and while Fibre Channel has a very substantive lead because it was in the market for quite a few years, iSCSI, whose growth is predicted at over 200 percent annually, will catch up, and over time eventually surpass Fibre Channel attached storage.

So, this is a super good opportunity for the industry as a whole. And Microsoft has been making some substantive investments in this space; we have had an initiator as a part of Windows Server 2003 since it shipped, and Windows Server has worked very, very well in the iSCSI space, enabling disks to be attached across the network through iSCSI.
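For reference, the Microsoft iSCSI initiator Bob mentions includes a command-line interface, iscsicli.exe; a minimal session to attach a remote disk might look like the following sketch (the portal address and target name are placeholders):

```powershell
# Register the target portal with the Microsoft iSCSI initiator
iscsicli QAddTargetPortal 192.168.1.50

# Discover the targets that portal exposes
iscsicli ListTargets

# Log in to a discovered target (IQN below is a placeholder)
iscsicli QLoginTarget iqn.1991-05.com.example:storage.disk1
```

Once the login succeeds, the remote LUN appears to Windows as a local disk and can be partitioned and formatted normally.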

There are two things that have happened over the last few months that are really important, though. One is we’ve worked cooperatively with the industry on a diskless boot approach to enable Windows Server, particularly interesting for things like blades, but really interesting for any kind of server in a data center environment that doesn’t have a native, attached hard disk, to boot remotely off of an iSCSI partition. And then we just recently announced an acquisition that enables us to provide, as a part of our storage server, an iSCSI target to enable a NAS device to actually also function as a SAN in the iSCSI world.

So that convergence, those two things, is really additive to the iSCSI ecosystem, and we think there's a tremendous opportunity for the hardware industry to take advantage of them. There's a lot of profit to be made in storage still, and there's a lot of opportunity to build solutions around Windows in that space.

Now, let me finish up the talk and close with a discussion of management, and some of the innovations that are happening in the management space. In some ways, the PowerShell demo was very much management focused, because it really enables administrators to run remote scripted operations against the many machines that they have. We're making many, many different kinds of investments in management, and today what I really want to focus on is the importance of virtualization, and the impact that virtualization will have on the server marketplace.

Now, as we sort of look forward over the next couple of years, we expect to see continued explosive growth in the use of virtualization on servers. In the long-term, we expect virtualization to be standard on all servers shipped and, in fact, eventually become the default mode, that is that all workloads will run in a virtualized environment. I don’t expect that to happen for at least three to four years, but I do expect that over the next few years, more and more customers will begin to utilize virtualization for a very large percentage of the work that they do.

Again, we see tremendous growth in virtualization. IDC estimates show it growing to over 1 million virtualized servers by 2009. And an interesting phenomenon here is that as these servers move toward virtualization, the overall cost per unit tends to go up. Virtualization provides an opportunity for higher utilization on servers, and customers tend to buy more fully featured and fully equipped servers to virtualize, so the revenue opportunity for the hardware industry is substantive as data centers and IT begin to build more and more of their solutions in a virtualized environment.

We’ve made many investments in virtualization. Bill showed some of those this morning, and I’ll show a few more of them in the next few minutes, but one of the most important things we’ve done here is really focused on creating a business model, and a licensing model that works around virtualization. We introduced all of that last fall, but the thing to realize about Windows Server is, it is ready today to run in a virtualized world.

And when we think about virtualization, we are thinking about this in a very ubiquitous way across everything that we do in servers, and we’ve now come to realize that in some sense, there are really three levels of virtualization that Microsoft has investments in, and the industry as a whole has an opportunity to invest in. The one that tends to be familiar to people is really hardware virtualization, where you take a single piece of hardware, and you virtualize that environment to enable multiple copies of the operating system to run, be they Windows, Linux, or other operating systems on top of that hardware. So, that’s hardware virtualization, and that’s really what Bill showed you earlier today.

There are two other layers of virtualization, however, that are interesting. The second layer of virtualization is OS services, OS system services virtualization. And this enables you to run multiple instances of operating system services within a single operating system, thus providing very, very high scalability and some level of isolation.

In some senses, as you think about these three different levels of virtualization, you can think of hardware virtualization as providing the greatest isolation with the least granularity, i.e., the granularity of an entire operating system. OS services virtualization is sort of in the middle: it provides better granularity, but less isolation than hardware virtualization, and it's particularly interesting for scenarios like hosters that want to run thousands and thousands of different customers on a single server. You can't do that with hardware virtualization; the overhead is too high. But with OS services virtualization you can. We're making some long-term investments in this space. It will be a few years before you see these things come to market, but we are investing in this space, and we do think it's very important.

The third area of virtualization is application virtualization, which provides the best, the smallest, granularity, but the least isolation of these three types of virtualization. And what's interesting about application virtualization is that it provides a very simplified environment for installing applications, and enables applications that might not otherwise be compatible to run side by side. So, it dramatically reduces the cost to IT, and it's quite applicable to both the desktop and the server environments.

So, three types of virtualization, three different investments, all three very important over the next few years as IT uses virtualization to reduce its costs.

Now, Windows Server virtualization: Bill already talked about this today with the advent of Windows Server virtualization, the new hypervisor, and the demonstration that he showed. That hypervisor is obviously running today. We're making incredible progress in terms of its development. We expect it to be in beta by the end of this calendar year, and available within 180 days, within six months or so, after Windows Server Longhorn ships. This is a super important technology. I was very excited to see that demonstration. I'm really an engineer at heart, and so I really enjoy demonstrations like that. I have to say, I've been around Microsoft for a long time, and I think the last demo that I enjoyed as much as that was back in '92, when I saw the first version of Windows NT demo multiple Bezier curves in a multi-threading environment. Seeing those multiple processors all virtualized, and the ability to add hardware on a dynamic basis, really provides some unique capabilities to the Windows Server operating system that we think will be very important to customers in the years to come.

Now, in order to make that work, as Bill said, we really need to do more than that. We need to provide an environment that makes these solutions very manageable, and we want to provide a management solution that spans from today's Virtual Server environment to the world of Windows Server virtualization in the future. And so what I'm pleased to do today is announce the newest member of the System Center family, our management suite: System Center Virtual Machine Manager, which provides enterprise management across a wide variety of machines that host virtualized images.

Virtual Machine Manager will provide the ability to manage the images that are stored offline, migrate workloads from one machine to another, and manage the overall set of templates that are used to create those images. With that, what I would like to do is invite Eric Winner up to do a demonstration of Virtual Machine Manager.

Eric, good morning.

ERIC WINNER: Good morning. (Applause.)

This is System Center Virtual Machine Manager, a centralized console for managing Microsoft Virtual Server and Windows Server virtualization. Today, I'm going to show you how to use Virtual Machine Manager to rapidly provision virtual machines and optimize your server utilization.

Many customers adopt virtualization to increase utilization as they move to modern hardware. As an IT admin, the first thing I need to know is which servers to consolidate. Virtual Machine Manager integrates with Microsoft’s Operations Manager to provide reports like this one, which clearly show my least utilized servers, and thus the best candidates for consolidation.

Now that we’ve identified the servers to consolidate, let me show you how Virtual Machine Manager converts a physical server to a virtual machine.

BOB MUGLIA: One of the things that is really interesting about Virtual Machine Manager is that it's part of the System Center product line overall. We really believe that virtualization is a core capability of the IT environment, it's deeply integrated into the operating system, and the management of virtualization needs to be integrated together with the other management tools that IT uses. So, for operations management, Virtual Machine Manager works with Operations Manager to provide that set of services. For software distribution and configuration, it works with SMS, or System Center Configuration Manager, to provide those updates out into the environment.

ERIC WINNER: That's right. So, driving from that MOM data, I enter the source computer information, and then identify the virtual machine, which has defaulted to the source computer for a P2V conversion. The hardware settings of the virtual machine are also set based on the source computer. But, as Jeff showed earlier, with Windows Server virtualization I can change those now or later, even while the VM is running. Now that we've configured our virtual machine, let's assign it to a host and start it.

This brings us to capacity planning, one of the main features of Virtual Machine Manager. A key to maximizing your utilization is the placement of VMs on the proper hosts across your network, but calculating the optimal placement is a complicated business. Virtual Machine Manager solves this by integrating a set of capacity planning models developed by Microsoft Research. Intelligent placement allows System Center Virtual Machine Manager to place VMs on the proper hosts and increase your utilization. The star ranking shows how well each host matches the particular VM's requirements.
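The intelligent placement idea Eric describes can be illustrated with a small sketch. This Python model is purely illustrative: the host metrics, the headroom-based score, and the function names are all invented here, and stand in for the far richer capacity-planning models from Microsoft Research that the product actually integrates.

```python
# Illustrative sketch of "intelligent placement": rank candidate hosts
# for a VM by how much CPU and memory headroom each would retain.
# The scoring model below is invented for illustration only.

def star_rating(host, vm, max_stars=5):
    """Return 0..max_stars; 0 means the VM does not fit on the host."""
    cpu_left = host["cpu_free"] - vm["cpu"]
    mem_left = host["mem_free_gb"] - vm["mem_gb"]
    if cpu_left < 0 or mem_left < 0:
        return 0
    # Score by remaining headroom as a fraction of total capacity.
    headroom = min(cpu_left / host["cpu_total"],
                   mem_left / host["mem_total_gb"])
    return round(headroom * max_stars)

def rank_hosts(hosts, vm):
    """Best-matching hosts first; ties broken by name for stability."""
    return sorted(hosts, key=lambda h: (-star_rating(h, vm), h["name"]))

hosts = [
    {"name": "hostA", "cpu_total": 8, "cpu_free": 6,
     "mem_total_gb": 32, "mem_free_gb": 24},
    {"name": "hostB", "cpu_total": 8, "cpu_free": 2,
     "mem_total_gb": 32, "mem_free_gb": 4},
]
vm = {"cpu": 2, "mem_gb": 4}
best = rank_hosts(hosts, vm)[0]["name"]  # the top-rated host
```

A real placement engine would also weigh disk, network, and workload history; the point of the sketch is only the shape of the problem: score each host against the VM's requirements, then sort.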

BOB MUGLIA: So, this takes into account the characteristics of the source application, the place where the application is coming from, together with the characteristics of the virtualized environment: what else is running, how much memory is available, et cetera.

ERIC WINNER: That's right. And the rankings can be scoped to a specific pool of servers, or across the enterprise. In addition, the settings for generating the rankings can be easily tuned and customized. So, now that Virtual Machine Manager has done the hard work for me, I simply choose the top-rated host, and that's it.

Now, before we finish this task, I want to point out the View PowerShell Script button. Everything you'll see today can be easily scripted with our new PowerShell command-line interface. Since this UI is layered on top of PowerShell objects, every action logs the cmdlet in our audit trail to assist the admin in learning the PowerShell interface.
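The layering Eric describes, where every GUI action bottoms out in a scriptable command that is also logged, is a general pattern that can be sketched briefly. This Python sketch is an invented analogy rather than the actual VMM implementation, and the command name used is hypothetical, not a real cmdlet.

```python
# Generic sketch of a GUI layered over a command interface: each UI
# handler does no work itself, it just issues the equivalent command,
# and every command is appended to an audit trail the admin can read
# to learn the command-line interface. Names here are hypothetical.

audit_trail = []

def run_command(name, **params):
    """Render and log the exact invocation, then 'execute' it."""
    rendered = name + "".join(f" -{k} {v}" for k, v in sorted(params.items()))
    audit_trail.append(rendered)
    return rendered  # a real implementation would dispatch to the engine

def move_vm_button_clicked(vm, host):
    # The button click is just a thin wrapper over the command layer.
    return run_command("Move-VM", VM=vm, Destination=host)

move_vm_button_clicked("web01", "hostA")
```

The design payoff is that the GUI can never do anything the script layer cannot, so anything an admin clicks through once can be replayed and automated later.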

BOB MUGLIA: As I mentioned, we're moving to a model of management where the configuration logic associated with managing applications in the Windows Server environment is stored inside PowerShell script, and the GUI is just layered on top of that. Exchange 2007 will be the first product to ship with that. The next generation of Operations Manager will also use that technique, as will Virtual Machine Manager.

ERIC WINNER: That’s right.

So we’ve seen how easily Virtual Machine Manager converts a physical server to a virtual machine to optimize your utilization. As you create hundreds, and then thousands of virtual machines, you must keep track of them.

Let me show you how Virtual Machine Manager organizes your virtual environment. I've chosen to arrange my servers into resource pools by business unit, but I can easily view them by state, owner, and operating system. In addition to these running VMs, I also have to track the offline virtual machines, virtual disks, templates, and other VM building blocks. The template in particular is an important object that allows the administrator to encapsulate a complete set of hardware and software settings into an easy-to-use package.

BOB MUGLIA: The ability to manage these offline images, and to create standardized images through templates, is a key attribute of Virtual Machine Manager, allowing companies to set up an infrastructure of all the different sorts of workloads and roles that they have in their company, then dynamically assign those to running servers in a virtualized environment.
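The template concept described here, a reusable bundle of hardware and software settings, can be sketched in a few lines. This Python model is illustrative only; the fields, defaults, and image names are invented, not Virtual Machine Manager's actual schema.

```python
# Minimal sketch of a VM template: a frozen bundle of hardware and
# software settings that stamps out per-VM configurations on demand.
# All field names and values are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class VMTemplate:
    name: str
    cpus: int
    mem_gb: int
    os_image: str
    software: tuple = ()

    def instantiate(self, vm_name):
        """Produce a concrete VM configuration from this template."""
        return {
            "name": vm_name,
            "cpus": self.cpus,
            "mem_gb": self.mem_gb,
            "os_image": self.os_image,
            "software": list(self.software),
        }

web_template = VMTemplate("SmallWebServer", cpus=2, mem_gb=4,
                          os_image="ws2003-base.vhd", software=("IIS",))
vm_config = web_template.instantiate("web01")
```

Because the template is immutable and every VM is derived from it, an administrator can reason about a fleet in terms of a handful of standard roles rather than hundreds of hand-built machines.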

ERIC WINNER: That's right. And I could easily create a new VM using a template in that same wizard you saw earlier. But, instead of showing you that again, I'd like to show you something even more exciting. Virtualization enables the data center to be more agile, but why stop there? Today it can take weeks to fill a request for a new server. Virtual Machine Manager enables administrators and developers to create their own VMs in a manageable way. I'd like to introduce the Virtual Machine Manager Self-Service Portal, where I can see, manage, and interact with my VMs.

Now I'd like to show you how easily I can create a VM here. First, I enter my VM identity, and then select a template from among the sets published by my administrator, and that's it. I don't even need to select a host. Virtual Machine Manager uses intelligent placement to select the most suitable host from among the resource pools for this person.

BOB MUGLIA: In many companies, business owners really are the ones that are required to go out and make decisions around buying servers, and once the decision is made to acquire a server, it takes several weeks for IT to get it up and running and get an environment in place. What this provides is really the ability for that business owner to work with IT in a self-service mode, to go out and grab a virtualized session on an existing server, and get up and running to solve their business problem much more quickly.

ERIC WINNER: That's right. And as you can see, we've taken a task that can take weeks, and reduced it to three easy steps. Now, in order to keep self-service manageable, the Virtual Machine Manager administrator controls who has access to specific templates, resource pools, and actions. In addition, the administrator can set quota and expiration policies to further ensure that host resources are shared and optimized. At any point the workload administrator or developer can see which of his resources are in use, and which remain.
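The quota and expiration policies mentioned here can be sketched as a simple admission check. This Python sketch is illustrative: the quota limit, lifetime, and field names are invented, not the product's actual policy model.

```python
# Sketch of self-service guardrails: before provisioning, check the
# requester's quota and stamp the new VM with an expiration date.
# Policy numbers and field names are invented for illustration.

from datetime import date, timedelta

def request_vm(owner, running, quota=3, lifetime_days=30, today=None):
    """Grant a VM if the owner is under quota; deny otherwise."""
    today = today or date.today()
    if len(running.get(owner, [])) >= quota:
        return {"granted": False, "reason": "quota exceeded"}
    vm = {"owner": owner, "expires": today + timedelta(days=lifetime_days)}
    running.setdefault(owner, []).append(vm)
    return {"granted": True, "vm": vm}

running = {"alice": [{}, {}]}                                  # 2 VMs already
first = request_vm("alice", running, today=date(2006, 5, 23))  # under quota
second = request_vm("alice", running, today=date(2006, 5, 23)) # now at quota
```

Expired VMs would then be reclaimed by a background sweep, which is what keeps a self-service pool from silently filling up with forgotten machines.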

So we've shown today how Virtual Machine Manager can optimize your server utilization, rapidly provision virtual machines, and centrally manage both Microsoft Virtual Server and Windows Server virtualization through our Administrator Console, Windows PowerShell, and our Self-Service Portal.

Thank you. (Applause.)

BOB MUGLIA: So that showed a lot of great capabilities with Virtual Machine Manager, but there's more that we could have shown if we had time. One of the things that Virtual Machine Manager will do is enable migration of workloads from one virtualized server to another, and with Windows Server virtualization that will be able to be done live, so that a workload can be transferred to another server in just a few seconds, really not missing a beat in terms of serving, and continuing its capabilities on the network. So it's a very full-function environment: the combination of Virtual Machine Manager and Windows Server virtualization, which is really a leading-edge, hypervisor-based solution.

I'll point out that Windows Server virtualization's hypervisor is the first hypervisor that's ever been designed from the beginning to take advantage of the virtualization capabilities being provided by both Intel and AMD in the chip, and to really depend on those capabilities to do its tasks. That, together with Virtual Machine Manager, which enables great management solutions on top of this environment, really means that the Windows environment has a great future in virtualization. As I said earlier, we really see virtualization being core to everything we do in the years ahead.

One of the other things I want to announce today relates to what I said earlier: there are a lot of interesting things happening at different levels of virtualization.

We talked about what's being done in hardware virtualization, but I want to talk a bit about application virtualization. In this space, a number of different companies have been making investments in making Windows applications easy to manage and install. There's no question that the leader in this space is a company called Softricity, which has worked with companies of all sizes, some very large companies as well as some smaller companies, to solve the problems they have of managing and deploying applications on Windows desktops.

Softricity has really cut costs and built a product that customers really find does an amazing job in their environment. And today we’re announcing that Microsoft plans to acquire Softricity and bring that into our family of products. We’ll be incorporating this technology into our other products over time, and we really look forward to working with the Softricity folks in solving these sets of very, very important customer problems.

Application virtualization addresses a key problem. Microsoft thinks about virtualization in a wide variety of ways, including hardware virtualization and OS services virtualization, and with this Softricity acquisition we have a great base to go forward on with application virtualization.

So, as we look forward, Windows Server is doing really well today. Windows Server 2003 is a great, stable product in the marketplace. R2 provided a great sort of mid-life kicker to the product, really took it forward, and allowed customers to solve problems they couldn't solve before. Longhorn is clearly the future. There's no question that as we move forward with our investments, the next generation of Windows Server, Longhorn, is the basis for that. We're really thrilled to be delivering Beta 2 today. And we will be shipping Longhorn in the second half of calendar '07. Now, that's a long time between now and then. We have an update, a Beta 3, that will come in the first half of 2007, and we'll be doing CTP releases between now and then to continue to deliver refreshes to customers that are working with the product.

We have a number of servers in production right now inside MSIT, and are working with customers as a part of our technology access program. We expect many customers to begin deploying Beta 2 of Longhorn inside their production environments. We're taking the time to make sure we get this one right. There's no reason to rush this to market, given that Windows Server 2003 is providing great services for our customers. With the stability, reliability, and dependability that customers have come to rely on with Server 2003, we want to make sure that Longhorn Server is a clear step forward, solving new problems while providing the reliability and availability that customers expect. So, we've got a lot of bake time in this schedule to make sure that the product is really ready for very broad production deployment at the time it ships.

As I mentioned, we’ll be shipping the virtualization capability within about six months of Longhorn Server, and that capability will be part of the core operating system. So that is something else customers can look forward to.

A lot of great stuff today. Bill talked about a lot of great things. Will talked about a lot of great things. In the server space, we look forward to tremendous opportunity working together with you to drive the industry forward. The real momentum in the industry comes from the innovation that's driven by the companies that the people in this room represent. We see tremendous opportunity in workload spaces like small business and compute clusters. Great opportunity to take advantage of new networking capabilities and put Windows Server, on industry-standard computers, in places where it couldn't possibly scale before. Opportunities to simplify the administrative environment with things like PowerShell command scripting.

In the storage space, there's a tremendous opportunity around network storage, particularly with iSCSI. Over the next few years, 10 gigabit Ethernet will drive down the cost of very, very high-performance network-based storage attached to Windows Servers. Great opportunity in that space.

And virtualization really is the future of the server environment, and there's a huge opportunity to build solutions around that. You guys make it all work. We really appreciate all of your support, and the partnership that we have together. It's the combination of the hardware industry together with the innovation that all of the software vendors provide that really enables customers to solve these business problems in an industry-standard way that they could never solve before.

Thank you very much. Have a good lunch, and a great show. Thanks. (Applause.)

