Remarks by David Thompson
Corporate Vice President, Microsoft Windows Server Product Group
Windows Hardware Engineering Conference (WinHEC) 2003
New Orleans, Louisiana
May 7, 2003
TOM PHILLIPS: It’s my honor to introduce David Thompson. Dave is the Corporate Vice President and the Development Manager for the Windows Server technology group and the Windows Server 2003 product, and he’s here to share with us some perspectives on the product he’s just shipped, as well as directions and server futures.
So, with that, I’d like to introduce Dave Thompson.
DAVID THOMPSON: Good morning, Tom.
Good morning, and welcome to the Server talk. The Server talk will be a little different. With servers, silence is a virtue. But behind the immersive experience, behind the mobile experience, and behind the distributed fabric of today’s computing, the servers are the critical backbone.
There’s another supporting theme to this morning’s talk, and that is that I’ll probably use the word “we” a lot, and when I talk about “we” I’m not talking about Microsoft, I’m talking about all of us, the partners that have built the PC industry and the PC based servers that have been so incredibly successful in the last ten years.
So, I’m going to talk a bit about what we call the Windows Server System, Server 2003 which we just launched, our vision, the thing that guides us as we formulate our plans going forward, our strategy for working with partners, and the road map for the releases that we have planned, and then we’ll shift to talk about the future technologies that are under development in three major areas: manageability and simplicity, stability and performance, reliability and availability. And then, lastly, talk about how we enable industry innovation, again, in partnership. And then, of course, a call to action: what do we do with it.
Windows Server 2003 we actually released at the end of March, as you may well know, and we launched April 24th. We are running a launch that will reach 156,000 people worldwide. The theme of the launch is Do More With Less, which addresses the problem I see for users in any size of business today: squeezed by increased competition on one side, and flat or decreased IT budgets on the other. So, Do More With Less is the theme.
This release has two major characteristics. First, it’s incredibly customer focused. There are about 650 features, ranging from simple features to infrastructure changes, like the storage infrastructure that enables Windows, in concert with partners, to provide effective customer solutions in ways not possible before. There are features that make it easier to deploy the operating system, whether as IT infrastructure or as an application platform. And that’s another key point today: making it easier to deploy software is an area where we’re making huge investments, and it’s the thing that will jointly benefit us most, because in most cases it’s not the cost of the hardware that stops deployment, it’s the challenge of actually making the transition to a new version of the operating system, or to a new application platform. If we fix those challenges, you can sell more hardware of every type.
This slide shows the basic model for how we think of the audiences for the server. There are three basic audiences. The first is IT infrastructure, and what we tell people, and show them with case studies we developed before we released the product, is that we estimate, as a single number, 30 percent cost savings in IT infrastructure. It really varies widely, both more and less than that. I said this release is very customer focused: by the time we released, we had tested it in production on 10,000 servers with 100-plus customers. The customer feedback on this has been amazing.
When I was at the launch I had the opportunity to meet with many of the customers who took this on early and rolled out far more servers than they planned. The CIO of Safeway did some early experimentation with a few servers, and ended up rolling out 1,500 servers in production on pre-production code. The product is rock solid.
Second, the application platform: we say you can build applications that are twice as fast in half the time. That’s a combination of the tools, Visual Studio .NET Version 1.1, called the Everett release, which was released with Windows Server 2003 and announced with it, and the underlying infrastructure, including a redesign of IIS 6: better security, manageability, and performance. It is the best platform for .NET Framework applications.
And then, third, information worker infrastructure: driving information worker productivity through the combination of storage technologies, the ability to recover files, the ability to access files remotely from anywhere in the field through the distributed file service, and a very rich set of collaboration services in SharePoint Version 2.
So, those are the three ways we think about it. I said quality was the other major element: customers and quality. We actually ran reliability tests on 100 production servers for three weeks, and we would not release until we reached 99.995 percent availability, which is better than Windows 2000 in the field with the latest service pack and the best operational procedures. We actually achieved 99.998 percent, validated on that set of 100 production servers. It was actually that bar, together with the very high bar on security, where we made major investments in reviewing the entire design of Windows, that gated the release.
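Those availability percentages can feel abstract; a quick back-of-the-envelope calculation (not from the talk itself, just arithmetic) shows what the 99.995 percent release bar and the measured 99.998 percent mean in allowed downtime per year:

```python
# Back-of-the-envelope: downtime per year implied by an availability figure.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year

def annual_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year implied by an availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.995, 99.998):
    print(f"{pct}% availability allows about "
          f"{annual_downtime_minutes(pct):.1f} minutes of downtime per year")
```

So the release bar of 99.995 percent corresponds to roughly 26 minutes of downtime a year, and the achieved 99.998 percent to roughly 10 minutes.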
It was actually fairly nerve-wracking. We waited about a month longer than we anticipated, but that will result in driving the deployment of Windows Server 2003 dramatically higher. With customers today, we’ve got a rock solid reputation. And for customers with a very high percentage of NT 4 systems, where the hardware is so old it’s impossible to support, the performance of the new systems, both the hardware and the software, is so much better between NT 4 and Windows Server 2003 that it will be the primary driver of volume. The first analyst prediction I’ve seen, from IDC, said that the mix of Windows Server 2003 versus Windows 2000 after the first year of shipment would be about 50 percent higher than it was for Windows 2000. And starting at zero, that means the run rate is going to be a very high percentage, probably 60 or 70 percent of the run rate within 12 months. That’s the anticipated uptake, based on where the customers are and on the product itself.
So I won’t spend any more time on this, the key point is that this is a place that you and we can make money and make customers happy.
So this is the vision for the Windows Server. Basically, it’s a platform for development, deployment and operations: all the elements of the life cycle of applications and services. Services that federate seamlessly: between organizations, you can establish in the easiest possible way the security context and the operating context to efficiently do transactions between different parties. Federate seamlessly, and scale without limits. Scale up to very large systems; we’ll talk more about that. Scale out by adding racks of cost-effective servers. Scale down, delivering services to any device; these devices will drive up the need for servers, with lots of communication load, and there can’t be any authoritative state on these devices, it’s all got to be delivered by services to really have the immersive mobile experience. And scale way out, to span geographic boundaries.
We’ve always built distributed security infrastructure so that you could operate globally, in any topology, at any scale. But the availability infrastructure, such as clusters, needs to support geographic clustering, because of the very heightened awareness of the need for disaster recovery infrastructure. So that’s the vision. The strategy for partnership is pretty straightforward. We invest and innovate in the platform, and we innovate based on those investments and based on collaboration. We invest in software architecture, the OS, server applications, and I’ll talk more about that in a second, and in developer tools and support for developers. That goes back to the key, full life cycle: develop, deploy, operate. Focusing on delivering value across that is our strategy. We engage deeply with hardware partners, and collaborate on both hardware and software architectures.
The things I’ll talk about today we’ve certainly spent a lot of time developing: where we’ve decided to invest, what we think the changes should be, where we should go going forward. But we’ve done that in concert with you, and we will continue to do that and take feedback, and try to develop the best vision for where we should make our investments to make us collectively successful. And lastly, strong marketing support for Windows-based solutions. We’re going to spend about $250 million on the Windows Server 2003 launch. In fact, some of the press I talked to asked, is this the time to be making a big investment in a product like this? And our view is, absolutely, because it saves money at a time when people need to save money.
So we’re making that investment to make it successful; there are even television ads. I don’t know if you’ve seen the television ads for Windows Server 2003. I was about floored when, watching NYPD Blue, I saw the guy talking about his Active Directory rollout. It’s actually pretty cool to see a product you’ve worked on on prime time TV. Obviously, it happens to a lot more of you than it does to me. So we are making deep investments in the marketing of Windows Server 2003, and Windows-based solutions overall.
I said it wasn’t just about the operating system. This is the Windows Server system, and it really takes that same architecture and those audiences, applications, platforms, information workers and IT infrastructure, and it extends it above the OS. You can see the different server products for Microsoft under applications server – like Content Management, BizTalk, SQL Server – that are key parts of the application platform. And in IT infrastructure, the SMS, the management operations products – ISA, the firewall server – and then information worker – the server for projects, Exchange, the SharePoint portal service that integrates the SharePoint services that are in Windows. And Small Business Server, which we’ll talk about actually a little bit later in the talk.
The key thing is that what customers need is integrated solutions, and that’s what we focus on, in a very deep way. A specific example: in this release, the Windows Server 2003 team, the Exchange Titanium team (Exchange 2003, due out later this summer), and the Outlook 11 team, those three teams collaborated to provide the best solution for end users and IT pros that we’ve ever provided in a messaging infrastructure. And the results are very impressive. Outlook, when working with the Exchange server, to put it bluntly, actually does what I think it should have done a long time ago: it uses the network whenever it can get it, as efficiently as possible, transparently.
If you haven’t run the beta yet, you should get the beta. It’s an area they continue to evolve even in recent weeks, to tune and enhance, because it’s a fairly complicated problem, but it works pretty well now. And the Exchange team worked on tools for deployment and migration from Exchange 5.5 up to Exchange 2003, to ensure it would be as easy as possible to roll out these new high-value solutions. The integration of Windows Server and the server applications in this common architecture, integrated solutions, is the thing that drives deployment.
Here’s the Windows Server road map. It shows the evolution from NT 4 to Windows 2000, and in this product release we actually added one additional edition, the Web Server edition, a focused web application version of the server that costs less but only runs web applications. And then what I call Blackcomb is really just a list of future technology areas, not a release coming in three or four years; as you’ll see on the next slide, we’re actually rolling out additions to the operating system in a pretty steady stream. This stream happens in just calendar year ’03.
In June we’ll bring out an iSCSI initiator, a key piece of virtualization of storage across the IP infrastructure. NAS, network attached storage, version three will come out before the end of the second quarter. It’s built on Windows Server 2003, simplified in terms of minimized services and high security. It has the same performance benefits as Windows Server 2003, along with pre-configured OEM delivery of a fixed-function, easy to administer, storage-focused product. By the way, the Windows NAS product delivered by OEMs has about 38 percent of the worldwide NAS market. So it’s been very successful.
Then there are the deployment services, which have probably been mentioned; I’ll talk about this more. This is the first bubble of technology that we’re delivering as part of what we call the Dynamic Systems Initiative, and that will deliver in Q3. Small Business Server 2003 is the version of Small Business Server built on Windows Server 2003, actually refactored to provide a very cost-effective, well-integrated first server for small businesses. Virtual Server is a VM, virtual machine, technology. We acquired a company called Connectix, and its engineers came to work at Microsoft; they’re in Redmond now. We acquired this technology to support the migration and consolidation of server applications and OSes, to enable customers to buy the latest hardware, move forward with their applications, and then make transitions in their infrastructure to newer versions of the software over time. There are a lot of different scenarios you can support with Virtual Server: demonstrations, test scenarios where you run multiple virtual machines on the same physical machine. We use it broadly within Microsoft for such applications, for development and test in the same environment, and we’ll talk more about that in a minute.
And then, lastly, AMD 64 support in Service Pack 1 for Windows Server 2003.
So, that’s a set of what we call out-of-band releases: not part of a major Windows release, but effectively part of Windows, delivered so you can load them from the web, and serviced as part of Windows through the standard service pack stream. And this is just what we’re doing in 2003. So the message is, innovation does not have to wait for major releases.
What I would like to do now is bring out Sean McGrane, he’s going to give us a demonstration of Virtual Server, and some of the things that I talked about, how we support consolidation scenarios.
Good morning, Sean.
SEAN McGRANE: I would like to start by talking about the architecture of Virtual Server. Virtual Server creates multiple virtualized hardware environments on a single server system, and operating systems can then be installed on each of these virtualized hardware environments. And that operating system is referred to as the guest operating system. Applications can then be installed into that guest operating system. And the combination of the virtualized hardware, the guest operating systems, and the applications is referred to as a virtual machine.
Virtual Server will run on any Windows compatible server system, and for today’s demo we have a Dell 2650 with two hyper-threaded Xeon processors, and two gigabytes of memory.
DAVID THOMPSON: So, it runs on a Windows machine because the host operating system is actually running directly on the hardware?
SEAN McGRANE: That’s right.
DAVID THOMPSON: Virtual Server is running as part of the host operating system.
SEAN McGRANE: That’s right.
So, the Windows operating system that’s installed on the server is referred to as the host operating system, and today we’re using Windows Server 2003 Enterprise Edition as the host operating system. So, I would like to bring up the administrator console for Virtual Server. The product I’m showing here is the customer preview of Virtual Server. This is a pre-beta that was made available to our customers so they can evaluate the product and provide us with feedback of what they think of the feature set.
DAVID THOMPSON: Okay. So any of you folks can go to the Web today and download this preview that you’re going to see demonstrated.
SEAN McGRANE: That’s right. Yes.
So, in the top right-hand panel of the administrator console is a list of all the virtual machines that have been created in this instance of Virtual Server. I have pre-configured three virtual machines, each with a server operating system. I will start them up now.
So, one of these virtual machines has Windows 2003 installed as a guest operating system. Another one has Windows 2000 installed as a guest operating system. And the third has NT 4 installed as a guest operating system. Once all three of these operating systems are up and running, as they are now, there are three heterogeneous operating systems executing in isolated environments simultaneously on a single standard server.
DAVID THOMPSON: So, for instance, if somebody wanted to install Windows Server 2003 and run some applications or some of the services included in the OS, they could take an NT 4 server, the OS and the application, run it in one of these VMs, and then get rid of the old server?
SEAN McGRANE: That’s right.
DAVID THOMPSON: That’s the basic consolidation here. I’ve got it.
SEAN McGRANE: So, what I would like to show now is the virtual machine view for the server. One of the points about Virtual Server is that the hardware presented to the guest operating system is virtualized, and distinct for every virtual machine. What that means is, Virtual Server maps each virtual machine’s usage of those virtualized hardware resources onto the real hardware resources that are available in the server.
So, what I would like to show here is the device manager in the guest operating system in this virtual machine, compared to the device manager for the host server. As you can see, the graphics adapter in the virtual machine is an S330. This is a virtualized device, and the same virtualized device is seen by every guest operating system in every virtual machine. And the real graphics devices are Rage XL.
DAVID THOMPSON: Okay.
SEAN McGRANE: So, one of the key things about this is that every virtual machine sees the same hardware environment. So you can take a virtual machine that was created on one server, move it to another server with a totally different hardware configuration, and it will run on that server.
DAVID THOMPSON: Good.
SEAN McGRANE: It makes the virtual machine files very portable.
So, what I would like to do now is to bring up the virtual machine view of NT 4. The main thing about this is that as of June of last year it’s no longer possible to certify NT 4 on new server hardware. What the virtual machine environment allows is, it allows you to run NT 4 on a brand new Dell server. One of the features that isn’t supported on NT 4 is hyper-threading, but because the hardware is virtualized for the virtual machine, we can run NT 4 in a virtual machine on a system that has hyper-threaded processors.
DAVID THOMPSON: And that Dell Power Edge 2650, that’s the one we’re actually running on?
SEAN McGRANE: That’s right. I’ll just open My Computer and we’ll have a look at the C drive. This is the virtual machine’s My Computer, and it thinks it’s got a C drive; the C drive is a Windows partition, and it has a standard set of subdirectories. In reality, this is one big file on the host system’s file system, and every time the OS or the applications write to disk, they actually write to this one big file. We can go have a look at this file. So here we have three virtual hard drive files; each one of these is the disk partition for one of these virtual machines.
DAVID THOMPSON: Okay. And these have the actual virtual drives, the data on the drives, and configuration and the binaries and everything?
SEAN McGRANE: All the information, all the context of the virtual machine is included in this one file.
DAVID THOMPSON: So that’s the file you could pick up and move to a different virtual server?
SEAN McGRANE: That’s correct, yes.
DAVID THOMPSON: Okay.
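The idea Sean describes, the guest’s entire disk living in one big host file, can be sketched in a few lines. This is a hypothetical flat layout (guest sector N stored at byte offset N times 512 in the host file), purely for illustration; the real Virtual Server .vhd format also has headers and can grow dynamically:

```python
import os
import tempfile

class FlatVirtualDisk:
    """Toy sketch of a virtual hard drive backed by one big host file.

    Hypothetical flat layout: guest sector N lives at byte offset
    N * 512 in the backing file. The real .vhd format is richer.
    """

    SECTOR = 512

    def __init__(self, path, num_sectors):
        self.path = path
        with open(path, "wb") as f:
            f.truncate(num_sectors * self.SECTOR)  # pre-size the backing file

    def write_sector(self, lba, data):
        assert len(data) == self.SECTOR, "writes happen in whole sectors"
        with open(self.path, "r+b") as f:
            f.seek(lba * self.SECTOR)  # guest LBA -> host file offset
            f.write(data)

    def read_sector(self, lba):
        with open(self.path, "rb") as f:
            f.seek(lba * self.SECTOR)
            return f.read(self.SECTOR)

# The guest "writes to its C drive"; the host just sees one file change.
backing = os.path.join(tempfile.gettempdir(), "guest-c-drive.vhd")
disk = FlatVirtualDisk(backing, num_sectors=1024)  # a 512 KB toy disk
disk.write_sector(3, b"\xab" * 512)
```

Picking that one file up and copying it to another host is exactly why the virtual machine is so portable.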
SEAN McGRANE: Okay. So I’ll now go look at the second virtual machine. This virtual machine has Windows 2000 installed. I’ll bring up the task manager quickly. The task manager shows the host resources that have been made available to this virtual machine: the virtual machine has access to one processor, and to 128 megabytes of memory. At the time the virtual machines are configured, the administrator gets to choose which host resources are assigned to which virtual machines, so the allocation of host resources is totally controlled by the administrator through the menus in the product.
DAVID THOMPSON: So this guy’s got 128 meg of the actual 2 gigs that the host has, and you can set that to anything you want?
SEAN McGRANE: That’s correct.
DAVID THOMPSON: Good.
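The administrator’s job here, carving a 2-CPU, 2 GB host into fixed per-VM allocations like the 1-CPU, 128 MB guest in the demo, can be modeled in a few lines. This is an illustrative sketch, not Virtual Server’s actual API:

```python
class HostResources:
    """Illustrative model (not the product's API) of carving a host's
    CPUs and memory into fixed per-VM allocations."""

    def __init__(self, cpus, memory_mb):
        self.cpus = cpus
        self.memory_mb = memory_mb
        self.vms = {}  # name -> (cpus, memory_mb)

    def allocate(self, name, cpus, memory_mb):
        # Refuse an allocation that would oversubscribe the host.
        used_cpus = sum(c for c, _ in self.vms.values())
        used_mem = sum(m for _, m in self.vms.values())
        if used_cpus + cpus > self.cpus or used_mem + memory_mb > self.memory_mb:
            raise ValueError("allocation would exceed host capacity")
        self.vms[name] = (cpus, memory_mb)

# The demo host: two processors, two gigabytes of memory.
host = HostResources(cpus=2, memory_mb=2048)
host.allocate("win2000-guest", cpus=1, memory_mb=128)  # the VM shown
host.allocate("nt4-guest", cpus=1, memory_mb=128)
```

A real VM monitor can time-share processors rather than dedicating them; the hard cap here is only the simplest possible policy.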
SEAN McGRANE: Okay. So I’ll just quickly jump back to the administrator console again, and talk a little bit about manageability. All of the user interface is built on a COM API that’s made available by Virtual Server. It’s then remoted, and can be viewed using any standard web browser; we’re just using a standard web browser to look at the virtual machines. What that means is that everything shown today can be scripted and automated using the COM API.
DAVID THOMPSON: Okay.
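To make the point concrete, here is what driving such an object model from a script might look like. The class below is a hypothetical stand-in, not Virtual Server’s real COM interface; it only illustrates that anything the browser console does can also be automated:

```python
class VirtualServerModel:
    """Hypothetical stand-in object model (NOT the real COM interface),
    illustrating script-driven VM administration."""

    def __init__(self):
        self.machines = {}

    def create_vm(self, name, memory_mb):
        self.machines[name] = {"memory_mb": memory_mb, "state": "off"}

    def start_vm(self, name):
        self.machines[name]["state"] = "running"

# A "script" standing in for what was clicked through in the demo:
# create the three guests and start them all, with no UI involved.
vs = VirtualServerModel()
for name in ("nt4-guest", "win2000-guest", "win2003-guest"):
    vs.create_vm(name, memory_mb=128)
    vs.start_vm(name)
```

With a real scriptable API, the same loop lets an administrator stamp out and manage dozens of virtual machines unattended.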
SEAN McGRANE: So, just quickly, to explain what we’re looking at here: the virtual machine events are detailed in the bottom right-hand corner of the screen. We already talked about the virtual machine list in the top right-hand side, and then we have various management features that allow you to create machines, create virtual networks, create virtual disks, and various other things. Okay.
There’s also a configuration option available for each virtual machine that allows you to go and reconfigure the host resources allocated to it. And a browser can be used to connect locally to a virtual server, and you can create multiple different virtual server instances on one box.
DAVID THOMPSON: So any virtual server is completely remotely manageable and configurable, and scriptable.
SEAN McGRANE: That’s correct.
DAVID THOMPSON: Okay. Great.
SEAN McGRANE: So basically that’s all I have to say. There’s a breakout session later on when we’re going to talk in some more detail about the architecture and talk in more detail about the road map.
DAVID THOMPSON: Okay. That’s great. Thanks a lot.
SEAN McGRANE: Thank you.
DAVID THOMPSON: All right. So just to recap, there are many different ways that VM technology turns out to be useful: in our business internally, in your business, in our customers’ businesses. But the prime driver for us was to enable migration of applications on older operating systems, like NT 4, onto new hardware platforms with Windows Server 2003.
Okay. So let’s shift gears and talk about the future: the key initiatives that we’re working on going forward. Again, these are defined for us by talking with customers and by collaboration with our partners: hardware partners, service partners, and software partners. The key initiatives are management simplicity, the number one issue; scalability and performance, where we’ve made some great progress that we’ll talk about; reliability and supportability; and how those enable industry innovation. For each one, I’ll talk a little bit about the key challenges our customers face, the investments we’re making, and the opportunities for you, our hardware partners.
The key challenge, which I’ve said already, at least once, is around deployment, installation and maintenance. You know, with NT 4 we, together, enabled grassroots server computing. Servers were deployed in places nobody ever dreamed servers would be deployed. They were deployed for all kinds of different reasons. They were deployed, as IT people think, without controls, but in a way that really changed the way server computing was done. At the same time, that flexibility, and that deployment in a broader range of places, created challenges around the management, deployment and operations of these systems. Managing multiple servers coherently: it was easy to add servers for scale, but it actually got expensive to manage them. Evolving server designs in and of themselves create different challenges: racks and racks of servers create challenges around the scale and speed with which we can deploy and configure them. You can build a great rack of servers, but if you can’t roll the software out to it effectively, or can’t run the servers in diskless configurations, you can’t take advantage of the full capability of the hardware design.
Storage management, driven simply by the decreasing cost of storage enabling massive use of storage, is a challenge in many, many different ways. And then lastly, small business: a rich market opportunity, but the challenge is providing the same kind of IT capability that large enterprises have to small business customers, in a way they want, and in a way they can handle, or that can be handled with partners. A big challenge.
So the things we’re doing, the places we’re making investments, and I’ll talk specifically about two of these, are: the Dynamic Systems Initiative, which I’ll talk about, and consolidation technologies, one of which we just talked about. We’ve also invested in the Fusion technology for DLL isolation, and, in Windows Server, a resource management tool that lets you allocate percentages of a single machine’s resources to different applications. Those are examples of things we’ve done to support consolidation in 2003, and going forward, investments in dynamic partitioning and the like also support that. The Windows storage technologies, and there are a number of talks on these, are what enabled in 2003 the ability, for instance, to build single providers and back up many applications; to have a common API for managing storage; to enable applications to actually manage storage in a cost-effective way. Those are some examples of the investments we’ve been making. And there’s Small Business Server 2003, which we’ll see specifically in a minute.
Hardware opportunities: designing for DSI, and we’ll talk about that. Consolidation offerings: customers want a combined end-to-end offering, so a combination of the technology, the services, and the hardware you’re building is what customers want. I mentioned the storage technology and VSS; supporting those is critical.
And then, lastly, engineering servers with small business needs in mind. This is actually one of my favorite examples. I happen to run SBS at home, and have for a number of years, and it’s a great experience. I recommend it for everybody; everybody should have a Small Business Server at home. But here’s the experience: I went to an unnamed hardware OEM as a regular customer with my credit card, bought a server through their small business path, brought it home, put it in my office, and quickly realized that I couldn’t stand to have it in there because of the noise it made.
Okay, small businesses don’t always have soundproof closets to put servers in. So I went to the trouble: I drilled some holes in the wall, ran some cable into my furnace room, and put the Small Business Server in the furnace room. And you know what? Today, with the furnace running, I can still hear the Small Business Server. The sound is an example of something I think we could do a little better on.
One of the other guys at work, who rolls out servers at home to really fully understand the experience, uses laptops. Now, they’re not necessarily cost effective, and they’re not fast, but they are quiet, they are small, and they have a UPS built in because of the battery. Small business servers could have wireless access points. There’s a great opportunity here. There are so many places where, with small business servers, if we make the software simple to use, and you make the hardware simple and easy and a fit with the environment, then we can make a lot of money.
Okay, with that, what I would like to do is bring out Guy, a product manager with the Small Business Server team, to give you a demonstration of Small Business Server, and show how in 15 minutes you can roll out an IT infrastructure for a small business.
GUY HAYCOCK: Good morning. Thank you.
What I have here is a low cost, high volume Intel server, built and shipped with the OEM pre-installation that we’re working on for SBS 2003. This product should be in the market in four to five months, now that we’ve launched Windows Server. Since you don’t want to see me type, I’ve pre-populated the product keys, and I’ll skip through two screens: I’ll skip through the license, and I’ve skipped the regional options. Let me just get cracking.
I’m going to skip the date and time, and now we’ll see a screen that you might not have seen before. This is some data about the small business. Let me just put some in; we need this because we’re going to deploy the extra software. For simplicity, I’m going to call this server ‘server’. Typically a small business only has one, so this makes it as easy as possible. Because we’re deploying Active Directory, I need to set the domain name and fully qualified DNS names. Following security best practices, we want different external and internal domain names; that’s what you see here. I’m just going to accept that.
DAVID THOMPSON: And it’s defaulted that?
GUY HAYCOCK: Yes. It’s defaulted that.
DAVID THOMPSON: Actually, if you just followed the instructions, you’d do the right thing.
GUY HAYCOCK: Right. I’ve got one more screen to come. It’s really common in an SBS environment to have two network interfaces, one for the external network and one for the internal. And here is the setup for the internal one. We deploy DNS directly; it’s required by AD. I can pick this up from the OEM preinstall. And we’re done. We’ve reduced the customer experience from what used to be two to three hours down to 15 minutes.
DAVID THOMPSON: Two to three hours if you knew what you were doing.
GUY HAYCOCK: If you knew what you were doing, down to 15 minutes or less depending on speed of the hard drives and the like.
Let’s switch to a pre-configured machine.
DAVID THOMPSON: We and our hardware partners.
GUY HAYCOCK: We and our hardware partners. This is the screen the customer would see when both we and our hardware partners are doing this with SBS 2003. This step would be to use the 32-bit processors. The next step is to connect the server up to the Internet, standard wizard based. But, again, I need lots of data: I need to know the type of connection; if it’s broadband I need to know what type of broadband connection, do I have an IP address, et cetera. This is relatively complex for small businesses. We can do better than that. So, working with a high speed Internet provider, we can skip all of this setup. Now, when I get high speed Internet deployed to my small business, I also get a floppy with a script that’s been pre-configured: it’s pre-configured with my domain name, WinsletToys.com, and it can set up all of the high speed Internet settings, even right down to what my e-mail address is going to be.
DAVID THOMPSON: All right.
GUY HAYCOCK: Let me close down that wizard. And for completeness, I’m going to run that script and actually connect SBS and Exchange up to the Internet. While I’m doing that, let me do one more step. One of the pieces that’s often hard is configuring desktop PCs. So this is my SBS server management console, and I’m going to have a look at client computers. I have a number on the network; let’s have a look at yours. Because we have Active Directory deployed, we can push down software settings, and even applications, and deploy them directly to the desktop without any intervention by the user.
DAVID THOMPSON: So this is driven by AD, but it’s actually a very simple view of AD, just a flat list of computers or people.
GUY HAYCOCK: Yes. Okay, let’s take that one step further. You’ve just employed me in your small business, and you’ve bought me a new PC. Again, it’s OEM preinstalled: here’s the PC, I’ve plugged it in, turned it on, and done the Windows XP setup. So, let me log on to the desktop; I don’t really have much of anything yet. How do I get the PC connected to the domain, and how do I get those applications running? It’s really simple.
You simply go to the server name, slash client setup; I just put this on a Post-it note with the user name and password. It’s that simple. I click on this link, which asks me, what’s your name, what’s your password. It goes to Small Business Server and finds all the users that you’ve already created. I’m going to say that only Guy can use this PC. Down at the bottom here is where I’d migrate data if I were moving to SBS; I’m just going to skip that for the moment. What computer name do you want? So, we’re going to add this Windows XP machine to the domain, change its computer name, and deploy a set of desktop applications. I’m done.
It’s going to take a few moments, so let me show you a completed PC. Let me log on and see what we get as a result of that automated client deployment. I run the browser, and hopefully we have a little bit more of a configured environment. We have Windows SharePoint Services; it’s going to take a second to load. SharePoint is the new place for people within the business to go to create and collaborate on documents.
DAVID THOMPSON: Okay.
GUY HAYCOCK: It’s shipping out of band with Windows Server later this year, and we’ve taken it one step further by customizing it for small business.
DAVID THOMPSON: So this is their internal company Web, basically.
GUY HAYCOCK: You bet. This is the place to go to work on and share documents. For example, here’s the Peanuts script that I built for the demo. I can check it out, I can edit it in Microsoft Word, I can discuss it with other team members, it’s a very rich collaboration environment. This is standard SharePoint services.
Now, in the SBS environment, we take it one step further. We built a vacation calendar, so you have one place to go to view events that are important to the company, and perhaps people’s holiday schedules, for example.
DAVID THOMPSON: I didn’t know I was going to Spain.
GUY HAYCOCK: You know, things change quickly.
DAVID THOMPSON: Good.
GUY HAYCOCK: We also built a help desk. It’s very common in the small business environment that there are technology providers helping these small businesses get the most out of their technology investments. So any of the users within SBS can create an item in the help desk, and I’ve done just that. And then the technology provider, remotely (and I’ll show you that in the next demo), can gain access to the Windows SharePoint site, see those incidents, and resolve them, all without going on site.
So let me show my favorite feature. Let’s have a look at Outlook. Many of us have set up a new PC and had to go and create an Outlook profile. You’ll notice in this case, all of that was pre-configured. I launch Outlook, it works, and I even have some welcome e-mail from the Small Business Server admin, pointing me to those rich resources such as SharePoint and remote access.
Now, one of the key features I like in Small Business Server is the ability to access the server when I’m not in the small business, from basically any desktop on the planet that’s running a modern browser. So let’s do that, and let me add, it’s secure. I simply go to my company’s Web site, slash remote; that’s all I’ll ever need to remember, from any PC, my brother-in-law’s, it just doesn’t matter. I enter my name (I’ll be you for the moment), and this is the central, single point to get remote access to those resources. Built on Exchange 2003, I have Outlook Web Access, a rich environment with all those same e-mails. I could go to SharePoint. But the feature I really like is being able to get access to my desktop back in the small business from anywhere on the network.
So I’m going to connect to mine, since we probably shouldn’t be on yours. This is a terminal services proxy, using Small Business Server. So the desktop that we’re using is not directly connected to
DAVID THOMPSON: I don’t get to access the local?
GUY HAYCOCK: There are some different profiles, and I set you up with more security than me. But this is my desktop. So to remotely access it from anywhere on the planet, all I need is a browser. You would typically use this instead of driving back to the small business, or to get access to perhaps an older application that doesn’t work well across the wireless network. And if I close that down, we go back to the remote user portal, the one place to go remotely to get access to small business resources.
DAVID THOMPSON: Great.
GUY HAYCOCK: So thanks, Dave.
DAVID THOMPSON: Thanks a lot, Guy.
So there you saw it: 15 minutes to a full IT infrastructure, including one that locks people out of remote access when they’re not supposed to have it. That’s how we simplify deployment of servers in the small business. And let me remind you, the complete infrastructure is really a solution, not just a server; it’s a server-centric solution, including the client. In the data center, the key initiative for simplifying deployment of servers and applications is the Dynamic Systems Initiative. I think Bill Gates talked about this yesterday. There are three key elements, just to recap. First, it’s an end-to-end architecture, meaning an architecture for a platform that supports development, deployment, and operations; that’s what I mean by end to end, building integrated solutions: develop, deploy, and operate. Second, there are a lot of deliverables across Microsoft, including the operating system, the tools, the server applications, and System Center, our new enterprise management tool; all these things are built around DSI. And lastly, it’s all about partner support for the initiative, in terms of building hardware that conforms, supplying drivers, and systems integrators being able to build solutions on DSI. We’ve just announced that we’ve expanded the set of partners from the initial set in early development.
So DSI: the key is really bringing together development and operations. Over the last few years, I’ve talked to a lot of CIOs and seen how they run their infrastructure. One of the companies I was most impressed with was a company in the retail industry. I can’t mention the name, but the key to their being very cost-effective, very agile, and having a very reliable infrastructure that runs literally tens of thousands of stores and servers is that their development people and their operations people work together. When there were operations problems, the development people were there fixing them, changing the applications, changing the infrastructure, and over time they took what was a daily two-hour meeting discussing the problems from the night before down to a 10-minute meeting, basically saying nothing happened, or very few things happened.
We also looked at an advanced research project called SIG, which studied the design patterns of how people run large Internet services. They do it with software; they do it by customizing and building very sophisticated, automated management tools. It’s the only way to operate complex systems at scale. But it was all customized. DSI is the natural evolution in the operating systems business: it takes those design patterns, those themes, those trends, and captures them in a platform. That platform is one where you can bring together the development of the application and operational input, and then, in the operation of the application, continue to evolve its design to make it more and more operable.
The phrase used to be design for manufacturability. Well, this enables design for operability. So, to step through the top-level steps: the first thing you do is describe the application structure and its operational requirements, the policies that you understand (it may not be all the policies you eventually implement). The tools do a lot of that for you with Visual Studio .NET, but you can also manually describe all these elements. What are the implementation guidelines? What is the architecture of the application, the way servers fit together, with back-end servers and front-end servers? What hardware does it need to be deployed to? What are the IT policies? Then the system takes this model, the SDM, and automatically deploys the application across a platform of distributed resources. It provides very cost-effective use of resources, provides the best real-time adjustment to operational issues, and then essentially lets you automate the operations, to the degree possible at any given time, based on that model.
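The design-time validation step described above, matching an application model against a pool of hardware before anything is deployed, can be sketched in miniature. Everything here (the class names, the fields, the best-fit placement policy) is an invented illustration of the idea, not the actual SDM schema.

```python
# Toy sketch of model-driven design-time validation: describe an
# application's tiers and their hardware requirements, then check a
# hardware inventory against the model before deployment.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    min_servers: int
    min_ram_gb: int          # per-server requirement

@dataclass
class Server:
    name: str
    ram_gb: int

def validate(tiers: list, pool: list) -> list:
    """Return a list of validation errors; an empty list means the
    hardware pool can satisfy the application model."""
    errors = []
    # Best fit: consider the smallest eligible servers first, so big
    # boxes stay free for the tiers that actually need them.
    available = sorted(pool, key=lambda s: s.ram_gb)
    for tier in tiers:
        eligible = [s for s in available if s.ram_gb >= tier.min_ram_gb]
        if len(eligible) < tier.min_servers:
            errors.append(f"{tier.name}: need {tier.min_servers} servers "
                          f"with >= {tier.min_ram_gb} GB, found {len(eligible)}")
        else:
            # Reserve the servers so two tiers can't claim the same box.
            for s in eligible[:tier.min_servers]:
                available.remove(s)
    return errors

app = [Tier("web front end", 2, 2), Tier("database", 1, 8)]
pool = [Server("r1", 8), Server("r2", 4), Server("r3", 2)]
print(validate(app, pool))   # [] -- this pool satisfies the model
```

The same check run against an undersized pool returns a readable error list instead of failing at deployment time, which is the point of validating the model up front.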
Now, when I say to the degree possible: what happens is that in certain constrained scenarios, you, or a third party, could develop an application using this model and design it for a broad range of scenarios. When you put it in a specific environment, you could have specific recovery policies that are well understood in that environment. Like what to do when you need more storage: you run out of disk space, do you want to defrag, do you want to allocate more from the pool? What do you do for failures, and where do you get replacement pieces? The operations team can use the same platform to add that, and moreover, as they begin to trust the system more and understand more about how they want it to recover, they can easily feed that back into the model.
So we couple together the development and the operations processes and build a platform; that’s what this basically is, a new platform. And this is its architecture: at the foundation are the hardware resources: storage; networking, meaning switches, load balancers, and firewalls; and computers. On top of that, the software that we provide is basic resource management. The software that you provide takes those resources and describes them to the system; it provides, effectively, drivers for the resource managers. And at the top is the automation and control center, the SDM-driven automation and control services that use the resource managers and the logic in the SDM, in that database, to roll out and deploy the applications. And this happens in steps, to be clear; it doesn’t all happen on day one.
Windows Server 2003 is the base, and then, when I say additional capabilities in 2003: Virtual Server lets you virtualize hardware resources so you can use them effectively, and automated server provisioning, ADS, the Automated Deployment Services, lets you roll out images and effect switching control over racks of servers. In 2004 and ’05, we’ll bring out the next level of integration with tools, where the tools can describe the application and the hardware that the application needs to run on, and you can do a design-time validation of the match between the hardware resources and the software design; then, with basic resource management, you can deploy that application onto a set of servers.
Then, as the last step, in 2006 and ’07 we’ll support full automation: development by third parties of line-of-business apps, with tools to capture all the operational policies, and the ability to update operational policies back to those control services that are driven by the SDM. When you make business policy changes, you can directly drive the operation of the application and its use of the hardware resources by changing the model. DSI is the data center initiative around manageability and simplicity.
In the area of performance, we have made a ton of investments, and we’ve made outstanding progress; this is definitely the ‘we’ that is Microsoft and our hardware partners. We’ve supported the Itanium processor, and you see a couple of great examples here: the IBM eServer x450 is the first time a four-processor Itanium system has been offered in the very successful xSeries line. We’re investing in AMD64 support, which will come with Service Pack 1 toward the end of the year, and you can see the Newisys four-processor AMD Opteron-based server over there.
We’re investing in scalable offload architectures for TCP/IP, effectively enabling systems to utilize 10 gigabit Ethernet as it becomes available and deployed, and also offloading processing cycles from the host processors. We’re doing that in a way that offloads the most stable, most costly elements of the protocol, beyond what’s been done before with simple checksum offloading, as well as adding a mechanism called receive-side scaling to allow a multiprocessor machine to more effectively handle the incoming load from the network interfaces. So we’re making investments in driving up the capabilities of the components, and it’s an opportunity for partners, both in networking hardware and in systems.
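The receive-side scaling idea, hashing each connection’s addressing information so every packet of one flow stays on one CPU while many flows spread across CPUs, can be sketched as follows. Real NICs use a Toeplitz hash and an indirection table; the generic hash here (sha256, purely illustrative) stands in for that.

```python
# Minimal sketch of receive-side scaling (RSS): map a TCP 4-tuple to
# a receive queue / CPU index, so one flow is always handled by the
# same CPU and many flows spread the load across all CPUs.
import hashlib

NUM_CPUS = 4

def rss_queue(src_ip, src_port, dst_ip, dst_port, num_cpus=NUM_CPUS):
    """Map a TCP 4-tuple to a receive-queue / CPU index."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_cpus

# Every packet of one flow hashes to the same queue...
q = rss_queue("10.0.0.5", 43211, "10.0.0.1", 80)
assert q == rss_queue("10.0.0.5", 43211, "10.0.0.1", 80)

# ...while many distinct flows spread across the available CPUs.
queues = {rss_queue("10.0.0.5", p, "10.0.0.1", 80) for p in range(40000, 40064)}
print(f"64 flows landed on queues: {sorted(queues)}")
```

The per-flow stickiness is what keeps TCP segments in order on a single CPU, while the spread across queues is what lets a multiprocessor machine absorb the aggregate load.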
Resource management and dynamic partitioning I’ve talked about in the management context. We are supporting dynamic partitioning in the next major server release, and we’ve invested in tools around high-performance computing and very high scale-up scenarios. The opportunity is to get the most scalable computers in the world running Windows, to build Windows storage solutions to support storage management, and to enable scalable, cost-effective networking.
So, here’s the proof. You probably already know this, but on Wednesday, the 23rd of April, NEC announced a 32-way Itanium-based machine at 540,000 tpmC, the highest number ever at the time, beating out the 128-way SPARC-based system from Fujitsu that had been number one for a long time. And then, the very next day, HP announced the new number one number, 658,000 tpmC, on a 64-way Superdome Itanium-based system.
If you look at this, at one processor, two processors, four, eight, 16, 32, and 64, we, and that’s the ‘we’ of the hardware OEMs and Microsoft, the Windows team and the SQL team all collaborating together, hold the number one performance number in each category, and the number one price/performance number in each category. And, as I said, all with partners. These are just a few of the partners, to be clear; these are the partners that get listed in the TPC-C results, but there are networking and storage partners as well, all of whom collaborated. The reality, as you know, is engineers in the labs at both companies constantly tuning and enhancing and driving forward the technology. The bottom line on scalability: people used to say Windows doesn’t scale. The bottom line now is, Windows scales in any case you can name.
Okay: reliability, supportability, and availability. The key challenges: unplanned outages; planned outages; diagnostics and troubleshooting, which are the things we probably get the most feedback on, since it’s complicated and difficult to unwind some of these issues, particularly in distributed scenarios; and then driver quality, an area where, we get it, we’re much better than we were, but we still have a ways to go. We’re making all kinds of investments in supporting high-availability hardware and in clustering, with eight-way clusters, made much simpler to set up, and we’ll continue to make investments. Reboot reduction: eliminating reboots on configuration changes for planned outages, and a technology called hot patching, which helps reduce the need to reboot when we issue hot fixes. We also focused on reducing the number of hot fixes in 2003 with our security work.
We’ve built a new diagnostics and repair framework going forward, and, one of the most interesting things, we run a reliability service where customers can measure reliability the same way we did before release: we can get feedback on reliability issues with the OS, they can get feedback on reliability issues with the hardware, and they can get feedback on reliability issues around their operational procedures. We’ve simplified, and this has been a steady evolution since the early days of NT, our architecture, making it easier to develop more reliable drivers. And we’ve invested in the feedback process, both for applications and for drivers, with the crash analyzer, which was actually initiated by the Office team; in Server 2003 we provide feedback to our partners when there are issues in their software.
Lastly, Windows Update. We’re making huge investments in unifying Windows Update, along with a version of Windows Update you can run inside your corporation, called Software Update Services. The thing we’re focusing on most deeply right now is unifying the distribution of security patches for all Microsoft products, and then continuing to enhance its ability to update drivers. And the key is that we’re going to need to work together to make the driver update experience reliable, making sure you always pick the right driver, and that the driver is very robust when it’s installed.
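The core job of an update service like the one described, comparing what is installed against a published catalog and flagging what is behind, can be sketched like this. The data shapes and version scheme are invented for illustration; they are not the actual Windows Update or Software Update Services formats.

```python
# Sketch of the core update-service logic: compare locally installed
# component versions against a catalog and report what needs updating.

def needed_updates(installed: dict, catalog: dict) -> list:
    """Return (name, installed_version, latest_version) for every
    component that is present locally but behind the catalog."""
    def ver(s):                      # "5.2.3790" -> (5, 2, 3790)
        return tuple(int(p) for p in s.split("."))
    return [(name, installed[name], latest)
            for name, latest in catalog.items()
            if name in installed and ver(installed[name]) < ver(latest)]

installed = {"nic-driver": "6.0.1", "storage-driver": "2.4.0"}
catalog   = {"nic-driver": "6.0.4", "storage-driver": "2.4.0",
             "video-driver": "1.0.0"}
print(needed_updates(installed, catalog))
# [('nic-driver', '6.0.1', '6.0.4')]
```

Note that components in the catalog but not installed locally are skipped; the hard part of a real service, which the talk emphasizes, is making sure the match is the right driver for the hardware and that the update itself is robust.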
So that’s the opportunity: support the availability framework we’ve built, support the diagnostics and repair work, but probably most important, do the things that help drive up driver quality, with testing, with the driver verifier, and with the feedback loop that we’ve used in development and will continue to use to drive up quality. Quality is going to be the biggest driver in being able to move forward to new software, and new software is going to enable moving forward to new hardware.
So this summarizes the strategy for enabling industry innovation through our partnership. The key challenge: how do you keep innovation going? Drive up the capabilities for customers, with things like DSI, to drive down operations costs. Provide flexible solutions, a platform for solutions, that they can use. And to be clear, that’s our strategy. It’s about software, and always has been, that makes it cost-effective to roll out these solutions. It’s not just about buying more services; services are a part of any solution, but it’s got to be cost-effective, or that’s an impediment to customers.
And then lastly, frictionless deployment. I’ve talked about how key it is to make it easy for customers to move forward and roll out new services. We invest $5 billion in R&D, bigger than any other company in a similar business. Marketing: I told you we’re going to spend $250 million on the Windows Server 2003 launch. Development: there’s a huge investment in Windows; just in our security push, where we made the decision to stop the development of Windows because security was causing customer dissatisfaction, we spent $200 million, and we spend many billions developing each version of Windows. The Windows Server System focuses on those solutions, investing across the operating system and server applications. And lastly, developer tools and support.
What do you need to do? We need to collaborate on hardware and software architecture. You need to tell us, help us understand, the trends that you see and that you’re driving, and we’ll do the same, and give us feedback on how we need to be driving our software. And then, of course, create Windows-based hardware solutions. So that’s the bottom line. The thing that I’ve really got to stress is that focusing on fundamentals is key: driving up the quality of Windows solutions. We have created an ecosystem that is unmatched in the industry; I’ve been in the industry for 25 years and have never seen anything like it. And the thing that’s put it at risk at times has been quality, when you do something this complex and integrate it all together. So focusing on fundamentals, making it high quality and flawlessly integrated across different partners, is key. Design for the Dynamic Systems Initiative; that’s the next version of the platform, the thing that will enable customers to build complex solutions and roll them out. And lastly, of course, make your next product a Windows-based solution, so that we, you and Microsoft, can be successful together.
Thank you very much.