Bill Laing: WinHEC 2008 Day 2 Keynote

Remarks by Bill Laing, Corporate Vice President, Windows Server and Solutions Division
Microsoft Windows Hardware Engineering Conference 2008 (WinHEC)
Nov. 6, 2008
Los Angeles

ANNOUNCER: Ladies and gentlemen, please welcome Corporate Vice President Windows Server and Solutions Division for Microsoft Bill Laing.

BILL LAING: Good morning, and welcome. Thank you for coming to WinHEC. It’s great to be here again. I really love coming to WinHEC, spending time with the hardware community. It’s kind of my people, as somebody said.

Yesterday you heard Jon DeVaan talk about the work that we've been doing on the fundamentals in the system. I'm not going to go into a lot of detail on that today, because lots of that work carries over to Windows Server, whether it's the reliability work we've done, driver compatibility, performance work, or even the power management. You also heard Steven talk about choice with Windows 7 and the investments Microsoft is making. Today I'm really going to focus on the next set of innovations for enterprise computing solutions, and I'll talk about areas where we can, together as a community, create great enterprise solutions.

First of all, I want to say a thank you. This past year Microsoft had the largest IT launch in its history, and many of you and your companies participated in it and helped make it a huge success, so we owe you a big thank you for that. Our theme was Heroes Happen Here, and it really went down incredibly well. It was focused on celebrating the people who take the technology that we jointly produce, be it hardware or software, and use it to run their businesses, or even infrastructure throughout the world. We got terrific feedback from it, it was really great to spend time with those customers, and the slide up there is an example of the people we worked with on those deployments.

During the last nine months since we launched Server 2008, we've had over 4,000 devices and over 800 servers certified for Windows Server and for Hyper-V. Some of these are on stage with me this morning. It's a little bit different from yesterday, because on both sides of the stage we actually used forklift trucks to bring our toys with us, which is a little different from the small devices you saw yesterday. The hardware systems that we have certified come from over 170 different partners worldwide, and if you go to WindowsServerCatalog.com you'll see the full listing.

So in these tough economic times for everyone throughout the world, I think we can come together as an industry, around industry-standard servers, hardware and software, and really deliver cost-effective, low-cost solutions for small and large organizations around the world. People are looking to lower costs, and that's an important thing we can do together.

One of those organizations is the Dartmouth-Hitchcock Medical Center. It's an organization we've worked with. They've deployed Windows Server 2008 and Hyper-V, and they manage their virtualization environment with System Center Virtual Machine Manager 2008. I'm going to show you a short video where we can listen to them talk about their success: they've moved some of their most demanding applications to a virtual environment and significantly reduced their costs. So can you please play the video.

(Video segment.)

So if there’s one thing that I enjoy more than spending time with partners like yourselves it’s really spending time with customers like that and actually seeing what they do. It’s really a great buzz to see that what we create jointly is really being deployed in these environments, healthcare or wherever. I want to just spend a couple of minutes looking back. We’ve come an incredibly long way. When Microsoft first entered the server market we had one product. And now we serve multiple markets, and we offer 24 different server solutions.

I'm an engineer by training and background, so I like to think of the technology coming together to be used in new and innovative ways. And I know our marketing people have a fancy phrase for this; they call it the right server for the right job. But really what it means is the opportunity for us to deliver server solutions from the home to the largest enterprise. We've also designed Server 2008 to be a foundation for a portfolio of server solutions, and we think of them as customer-focused: they're targeted to particular segments of the market, where we take Windows Server and we add things to it, or we sometimes take things away, so we really target specific customers. So let's jump to one end of that spectrum, high performance computing.

At the end of September, I think it was September 22nd, we launched HPC Server 2008 at the HPC on Wall Street conference. It was a pretty interesting time to go to Wall Street, if you remember the last weeks of September, and in fact it wasn't clear if some of the companies were still going to be there by the time we showed up. Given the financial climate I was very surprised at the response we got.

We got a very positive response to HPC Server 2008. Our goal with HPC Server is really to take it mainstream, to make it much more broadly available to more people, at lower price points. And often it’s been a niche market, and we think we can help integrate it broadly within companies.

In about two weeks, on the 17th of November I think, at the Supercomputing conference, we also expect to make some more news. So we're very excited about that.

So let me talk about another couple of solutions, the essential server solutions. Next week, on the 12th, we'll be launching a single-server solution, Small Business Server, which is an upgrade from our previous product, and a three-server solution, Essential Business Server. These solutions present, jointly with the hardware makers and the software vendors, a really tremendous opportunity to better serve small and mid-sized companies. If you think about the small and medium-sized business market, one fact that really amazes me is that throughout the world there are 33 million small businesses and two million mid-sized businesses, and more than 70 percent of them do not have a server. So this is an incredibly underserved market for us.

I also want to say a couple of words about Windows Home Server. It's been very well received, and if you go down to the Pavilion, the show floor, you'll see some great examples of Home Server. While we're excited about these Windows Server solutions, we're also mindful about delivering updates to Windows Server 2008; we want to continue making it a great solution. Last week we released Service Pack 2 for Windows Server 2008 to tech beta. It's a standard service pack: the hotfixes and security updates have been rolled into a single update package. It also includes the released Hyper-V product, which has been available for download since the middle of this summer. And we've also changed the default power policies to really drive significant power savings. We're really excited about our deeper understanding of how to set parameters in the system to reduce power.

So the last time I spoke here, which was about 18 months ago, I shared Microsoft’s commitment to deliver a predictable cadence of releases, and also to give you insight into our thinking, and our plans. And in these current economic times I’ve really been reminded how important it is that that commitment is made to you, and made clearly to you, particularly around our cadence.

So just as we did with Server 2003, which was a major release, as you can see, we followed about two years later with an R2, or update, release. That was Windows Server 2003 R2. And we're keeping that cadence with Windows Server 2008 R2. So we'll be delivering an update release, and we've targeted a number of high-value feature enhancements based on customer feedback, areas where they told us they wanted improvements, and also the hardware trends that have been going on.

So though it exploits major new hardware developments, it also maintains compatibility with the hardware base supported by Windows Server 2008, as Jon said, both on the driver side and in the systems that we'll support.

Windows Server 2008 R2

At the last WinHEC I said that Windows Server 2008 would be our last 32-bit server release, and with R2 we're completing that transition to 64-bit. So R2 will be released only in 64-bit editions. The x64 processors from AMD and Intel will be our focus for all the roles in Windows Server, and Windows Server for the Itanium processor remains our focus for very large database workloads. At WinHEC this week we're distributing the pre-beta build of R2, and it's available for both processor architectures, x64 and Itanium.

I want to talk a little bit about some of the hardware trends that drove our thinking on the R2 release and some of the feature enhancements. These trends have been relatively constant over the last 18 months; I think at the last WinHEC I pretty much used the same slide, though we updated the nice icons. But I called it a perfect storm: a number of hardware trends were coming together simultaneously. The 64-bit transition, which we really believe we're almost through now, with the price of DRAM dropping continually; if I look at Web sites, even ECC memory for servers is under $50 for 4 gigabytes. Server power efficiency, which was on everybody's mind 18 months ago and is even more important today. Multi-core processor architectures and virtualization. These were the trends that drove this release.

So with it being an update release, we wanted to focus on a number of pillars based on feedback we had from customers. Those were: streamlined management, because people wanted to increase the efficiency of their management; a better-together story with Windows 7, so that when the client and the server are deployed together we can really improve how they work together, particularly for an increasingly diverse and distributed workforce; what we call the enterprise-class foundation, really bringing mainframe-style capabilities to the mainstream with industry-standard servers, so increased scalability of our platforms, as you'll see later; and of course virtualization and consolidation for a much more dynamic data center.

So as I said, streamlined management was one of our focus areas. The automation of IT tasks can be an incredible way to reduce costs, because a great deal of investment in IT is expended on day-to-day operational tasks. Our scripting environment, PowerShell, which has been incredibly popular, enables very powerful automation. With R2 we're providing hundreds of PowerShell cmdlets, and they're available for administrators to automate their operations both locally on a machine and remotely, so we've improved the remoting capability of PowerShell.
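As a concrete illustration of that remoting story, here is a minimal sketch (not from the talk) using the PowerShell 2.0 remoting that ships with Windows 7 and Windows Server 2008 R2; the server names and the service being queried are hypothetical placeholders.

# Remoting must be enabled once on each target server, from an elevated prompt:
#   Enable-PSRemoting -Force

$servers = 'BRANCH-SRV01', 'BRANCH-SRV02'

# Run the same query on several servers at once; remoting tags each result
# with the PSComputerName it came from.
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-Service -Name 'Spooler' | Select-Object Name, Status
}

# For interactive work against a single machine:
#   Enter-PSSession -ComputerName BRANCH-SRV01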

Also based on feedback, customers wanted to run PowerShell directly on Server Core, which we didn't support in Server 2008, so we refactored parts of the .NET Framework, and now PowerShell is supported on Server Core; in fact, ASP.NET in IIS is also supported on Server Core now.

PowerShell is also the basis for the Active Directory Administrative Center, a new user interface we've built directly on PowerShell. That's very much our thinking going forward: the graphical user interfaces for administration will be built on top of PowerShell cmdlets, so anything you can do through the UI is also available through a script.

IT administrators were also very clear that they didn't want to have to go physically to a server to perform administrative tasks. So one of the changes we've made in R2 is that Server Manager can now be run remotely, from Windows 7 or from another R2 server.

So this is not the only instance where our Windows Server 2008 R2 and Windows 7 will be better together. Yesterday you saw much of our consumer investments in Windows 7, and today I want to show you some of the innovations where we bring R2 and Windows 7 together to enable a number of very interesting enterprise scenarios.

One of the things that we see is that organizations worldwide are distributing their activities; they're doing more and more work in branch offices. They want to offer sufficient support to workers there at a cost-effective price, because although we have relatively good bandwidth in the U.S. and in parts of Europe, there are other places where it's pretty expensive to keep branch offices permanently connected. So IT professionals and network administrators really want to deliver a great end-to-end experience for users in branch offices while also minimizing the networking costs.

The other thing that's happening is that the boundaries of work location are blurring. Many of you, and I myself, travel a lot, but we're often expected to keep up with the work back in our office, and it's not just e-mail; we really want to be able to access the IT resources in our corporations. So it's very important that the end-user experience for this be as seamless as possible, but it's equally important that these machines, whether they're in people's homes or traveling on the road, continue to be managed as corporate assets and kept up to date.

Demo: Windows Server 2008 R2 and Windows 7

So let me bring Rob Williams onstage so we can walk through a number of the features where R2 and Windows 7 work really well together. So, Rob.

ROB WILLIAMS: Good morning, Bill.

BILL LAING: Hi, Rob.

ROB WILLIAMS: Thanks for inviting me out to show our audience some of the great things we’ve done in Server 2008 R2 and in Windows 7.

I want to start out by showing a demonstration of this large device here. This is a Canon MFP.

BILL LAING: Great. Another device we had to get on the stage with a forklift truck.

ROB WILLIAMS: That’s right. So this is the forklift device I’m responsible for. So we’re going to start out by scanning a contract. This is a part of a basic business process that we may have. I have a contract for you right here, if you wouldn’t mind signing it.

BILL LAING: Okay. I know this is in Japanese, and unfortunately I don’t speak Japanese.

ROB WILLIAMS: Well, I know that you can’t read it, but they’ve assured me that whatever is in that contract is appropriate for us to execute.

BILL LAING: Okay.

ROB WILLIAMS: Thank you, Bill.

BILL LAING: Thanks.

ROB WILLIAMS: So I’m going to log in and start the scan process.

BILL LAING: I noticed you actually logged into this scanner; you told me that's not a usual activity.

ROB WILLIAMS: That’s right. Well, I actually, when I logged into the scanner I logged into Active Directory, and Active Directory was able to retrieve scan processes that are relevant to me. In this case, partner agreements.

BILL LAING: So based on your credentials in Active Directory, and other information, the scanner automatically knew what to do with the document to start it on some workflow process.

ROB WILLIAMS: Exactly. The partner agreement we just scanned was sent to a Web service that's part of our Distributed Scan Manager. The Web service was also given the information about the workflow, what to do with the actual documents.

BILL LAING: Is this a new role in Windows Server 2008 R2?

ROB WILLIAMS: That’s right. We’re expanding the print role in R2 to include scan.

BILL LAING: Great.

ROB WILLIAMS: Now the content has been securely sent to the server behind stage, that server was told that we wanted this information to go to my SharePoint site for partner agreements. So why don’t we take a look at my SharePoint site and see if it’s there.

BILL LAING: Great. So this information is either going to be available to you, yourself, or potentially somebody who is now going to act based on the document that you persuaded me to sign.

ROB WILLIAMS: That's right. So if corporate legal wants to review, or maybe undo, what you and I just did, they can go to my partner agreement site and pick up that document. So as we can see, here it is. I'm just going to click on it, and there it is, signed by you, although I'm not sure that's a legal signature with just your first name, Bill.

BILL LAING: Good. All right.

ROB WILLIAMS: With the new scan server role we're securely moving that content, and we're moving it using the Devices Profile for Web Services. It's a great new role, and it's a great way for individuals to include scanned documents in the workflow processes unique to them.

BILL LAING: So you heard me talk a little earlier about branch offices. So what are we specifically doing for customers, end users, and also IT administrators for branch?

ROB WILLIAMS: Well, let me show you what we’re doing in the branch. Right now a lot of people or a lot of enterprises have branch offices. They’ve got people over in these branches that are connected with relatively low bandwidth WAN lines. And so what I want to do is, I want to show you what the experience is like for those guys that are stuck on these 512k lines trying to access the corporate network.

So here we’re going to load a Web page through an emulated 512k line. You can see how slow that is.

BILL LAING: Yes, it reminds me of 1995. So does the design of the page.

ROB WILLIAMS: The Web site itself might be a little bit 1990ish, and it feels a lot like a modem. Here’s an example of a meg-and-a-half presentation that we need to download.

BILL LAING: Right. But we’re seeing something else here on the display.

ROB WILLIAMS: Yes. What that is on the split screen is the WAN bandwidth. So that's money being spent, right? When IT is paying for the WAN, they're often paying for a metered connection.

BILL LAING: Right.

ROB WILLIAMS: So that’s just money going away as that data is being transferred. It’s also a poor user experience. We had to wait that long.

BILL LAING: It takes about 20 seconds to move across.

ROB WILLIAMS: So let’s look at what happens with branch caching for another user in the same branch.

BILL LAING: So you’re now on a different machine, and this is a user who is completely different, has never visited this document before.

ROB WILLIAMS: That’s right. So this is a user who is getting into the office maybe second in the morning. He loads his Web page, and it’s like that, right.

BILL LAING: The speed was good, but it still looks like 1995.

ROB WILLIAMS: Yes. The branch caching won’t help Web site design especially when one of our developers is the one doing the Web page. I don’t mean our Web developers, I mean our Windows developers. So we’re going to load the same document, and when we do that you’ll see how fast it comes across, Bill. I’m going to save.

BILL LAING: So on the other half of the screen we can see we didn’t obviously use bandwidth to transfer.

ROB WILLIAMS: That’s right. So bam, and this red line is actually where we are in bandwidth. So you’ll see that there are two little blips, and that’s really just getting the hash information, then the content is transferred securely on the local network from the machine that had the content to the other machine that needed the content.

BILL LAING: So we talked about being on the road, or distributed working, people who want to work from home, or from cafes, what are we doing for those people to give them a much better access to IT resources?

ROB WILLIAMS: Well, I work from a wide variety of places: I work from cafes, as you mentioned, I work from hotels, stuff like that. And what we're doing is providing a way to seamlessly access those corporate resources from the road. So I'm going to log in real quick here. This is our DirectAccess feature, and with DirectAccess I'm connected to the local Internet, and I'm also connected to our own intranet at Microsoft. So if I go down and show the users here, it says Internet and corporate access, I'm sorry it's a little small down there, but what that means is I'm connected on the show network, but I'm also securely connected back to Microsoft headquarters.

BILL LAING: You didn’t have to do anything special to do that connection; it was completely seamless and transparent to you?

ROB WILLIAMS: Exactly. It was the same user experience if I have my laptop here, or if I have my laptop on my desk in Redmond.

BILL LAING: So if I send you some e-mail and I want you to do some research for me and I want you to get to some corporate resources, you can now do that.

ROB WILLIAMS: I can access MS Library, for example, which is our internal Web site for research, and this is going to our Microsoft servers; I can access it directly from here across the network without a problem. And at the same time, while it's contacting that, I can also access the Internet, for example MSW or MSN. Thank you, and we'll give that guy one more shot. So what it's doing is seamlessly accessing those corporate resources, and it's doing it securely. But there's another huge benefit of this, and that's the benefit for IT administrators. It used to be very difficult for IT administrators to reach back and manage those laptops that were roaming the world, outside the domain.

BILL LAING: So here we saw you connecting to an internal Microsoft Web site, the Microsoft Library, and at the same time this machine is now managed as if it's part of the corporate domain network.

ROB WILLIAMS: Exactly. So IT is pulling all those machines into the same management experience, whether the machines are inside or outside, and the users also get the same experience inside or out.

BILL LAING: So another thing IT professionals and network managers tell us about is their concern about security. We offer BitLocker for complete machines, but we've heard a lot of feedback about how they protect corporate data.

ROB WILLIAMS: Exactly.

BILL LAING: We’ve seen press reports on this.

ROB WILLIAMS: Right. And last night I came across a Web article that was exactly about this, where an individual lost a memory stick in the parking lot of a pub in England that included information, user names and passwords, for 12 million British taxpayers. You sound sort of British.

BILL LAING: Scottish.

ROB WILLIAMS: So your data is safe in that case. But, nonetheless, we’ve all heard these stories about lost laptops and what it means to the enterprise. It’s the IT administrator’s responsibility to secure that information. We need to give them the tools to secure memory sticks, too.

BILL LAING: Right. So let’s see what we’re doing on this machine with the memory.

ROB WILLIAMS: So coming over to this machine, in Server 2008 R2 we can use group policy to create encrypted memory sticks with BitLocker. So when we put in the drive, the first option we get is to go ahead and encrypt this drive and make it available for our users.

BILL LAING: So it's possible to push a policy to all the machines in the enterprise so that when a memory stick goes in, only encrypted data can go on it.

ROB WILLIAMS: Absolutely. While that's being set up, we want to move back to this laptop, because I have a drive that's already set up for this policy. So if we go here, you can just insert that drive into the …

BILL LAING: So this is a drive that I may have found, or somebody gave me that maybe they shouldn’t have given me, and we insert it in here, and you notice it’s prompting me for a pass phrase. So I’ll have a guess.

ROB WILLIAMS: Open drive.

BILL LAING: As you can see, the pass phrase I typed was incorrect.

ROB WILLIAMS: That's right. So that's not my pass phrase. But if I go ahead and type in my pass phrase, we can unlock this drive for use on this computer, and we can also unlock it so it always works on this computer. So now we have given IT administrators security in a variety of different ways. I'm going to start that update. We've really taken care of problems both on the laptops and now for lost memory keys.
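For administrators who prefer the command line, the same BitLocker To Go operations can be scripted with the manage-bde tool included in Windows 7 and Windows Server 2008 R2. This is a rough, illustrative sketch; the drive letter is an example, and the exact switches should be checked with manage-bde -? on your build.

manage-bde -status E:            # is E: encrypted, and which protectors does it have?
manage-bde -on E: -password      # encrypt the removable drive, prompting for a passphrase
manage-bde -unlock E: -password  # later, unlock it by typing that passphrase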

BILL LAING: Great. So you can see, for enterprises, with the combination of Windows Server 2008 R2 and Windows 7 we've made improvements in scanning securely as part of a workflow, improving the branch experience through branch caching, and DirectAccess so that wherever you are on the Internet you can be securely part of the corporate network and IT professionals can still manage those machines, and we're securely locking down data on flash keys.

ROB WILLIAMS: Absolutely.

BILL LAING: So, thank you, Rob. Thanks for coming and joining me today.

ROB WILLIAMS: Thanks a lot, Bill. (Applause.)

BILL LAING: So another area we've been investing in is what we call the core enterprise features. Windows Server and Windows Server applications have always been designed to scale on multiple processors, and we've continued to invest heavily in this area, not only on the technical side but on the business side: we took a leadership position and license by socket, or processor, rather than by core. And we're continuing to see that change rapidly as people develop more and more large systems with more and more cores.

Customers are looking to save money on their largest enterprise applications and databases, and those are increasingly being deployed on Windows Server because we offer the hardware and the software at a much cheaper price point, and that creates a need for much greater scalability.

Let me spend a couple of minutes here talking about the concept of logical processors, which might not be clear or obvious to everyone. The processor vendors have adopted a strategy of increasing processor performance by designing multiple processing units onto each physical processor. Historically, clock speed was the thing that drove performance; now we're seeing more and more logic on the chips. We refer to these processing units as logical processors, and logical processors can be either core-based or thread-based designs.

Today Windows Server only supports up to 64 logical processors. With R2, through enhancements in the kernel, new APIs and tools, and user interface changes, we now support processor designs with up to 256 logical processors. There's going to be a detailed breakout session on this later this morning; I encourage you to go if it's an area of interest to you. There's also a great video that Mark Russinovich did on Channel 9 where he talks about the actual engineering work we did to make this happen. These servers predominantly run large database systems.
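For context, here is a small sketch (not from the keynote) of how an administrator might check how many sockets, cores, and logical processors Windows sees on a box, using the Win32_Processor WMI class available on Windows Server 2008 and later.

# Count sockets, cores, and logical processors as reported by WMI.
$cpus = @(Get-WmiObject -Class Win32_Processor)

$sockets = $cpus.Count
$cores   = ($cpus | Measure-Object -Property NumberOfCores -Sum).Sum
$logical = ($cpus | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum

"{0} socket(s), {1} core(s), {2} logical processor(s)" -f $sockets, $cores, $logical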

Demo: Windows Server 2008 R2 and SQL Server

Let me next introduce Quentin Clark, he’s the General Manager of SQL Server. He’s going to talk about how SQL Server is building on top of these new R2 features. (Applause.)

Hi, Quentin. Great to see you again.

QUENTIN CLARK: Good morning, Bill. Good to see you.

BILL LAING: Hi.

QUENTIN CLARK: When was the last time we were up here talking together?

BILL LAING: Well, the last time we were talking, we were doing scale out when we were working on the Application Center, and we were focused on scale out. Today we’re here to talk about scale up.

QUENTIN CLARK: How appropriate.

BILL LAING: So tell us why this new support in R2 is really important for SQL Server?

QUENTIN CLARK: Certainly. With many-core support, SQL Server has now broken deep into enterprise mission-critical applications, and the scalability our customers require is, of course, incredibly high. With today's 64-logical-processor limit, there's a ceiling on how far SQL Server can scale on a single box. By unleashing all this additional capability, we're able to meet the needs of these high-end applications, so I'm super-excited about being able to provide that support to our customers. In our partnerships with AMD and Intel, and the server and storage vendors, we're able to offer an incredible variety of really good solutions to our customers. It's really something our customers have been demanding, and they're really looking forward to it.

So I understand we have hardware on forklifts up on the stage, let’s go talk about them.

BILL LAING: Yes, I'm delighted to have these scale-up servers, both from HP and from IBM, on stage with us today. These servers are great examples of the very deep engineering partnerships we've done: on the IBM machine we did joint work with Intel and IBM, and of course there's the long, deep partnership with HP.

So the machine on the far end of the stage is an IBM System x3950 M2. It's a Xeon-based system with 32 processors, and each processor has six cores, giving us 192 logical processors, and it has half a terabyte of memory in it. Can we actually bring the console of this system up on the screen?

QUENTIN CLARK: That’s a busy Task Manager. (Applause.)

BILL LAING: So this is Task Manager with 192 available cores on this system. We're pretty excited about this. So, Quentin, how is SQL going to benefit from all these cores?

QUENTIN CLARK: Let me show you. We actually have a build of SQL Server that we've been working on, slated for the Kilimanjaro release I'll talk about in a second, running on this server. If we can go ahead and generate some load on it, we're first going to show it running on 64 of those logical processors, and things will ramp up here pretty quickly. You'll see within seconds we've got it loaded to capacity; it's a little test application that we've been using in our development work. You can see now it's pretty much pegged all 64 of those logical processors. Now if we go ahead and unleash the rest of the processors, we have a way to do that, and then we'll drive that load, and you'll see that we vastly ramp up the scalability across all of the logical processors available on the box.

BILL LAING: This is terrific. For me, the green in the chart shows that all this compute power is being delivered to SQL; it's mostly running in user mode, and we're not seeing a lot of red in the chart, which is kernel mode, because we've done a lot of work to improve that scalability.

QUENTIN CLARK: The teams are working super-tightly together to achieve this kind of result. So this is the Xeon platform; what about what we have going on with the HP system here?

BILL LAING: What about Itanium? This system here is an HP system. Itanium is supported in Windows Server 2008 R2, and we've really focused our Itanium support on these large-scale database workloads. We also have other features, such as dynamic hardware partitioning, available with this kind of system. Our great partnership with HP, as I mentioned before, has enabled us to scale up to 256 logical processors in the Itanium edition.

So this system you see here, the two cabinets, has 64 processors; each processor has two cores, and each core has two threads. That gives us 64 times 2, times 2, which is 256. So 256 logical processors, and it also supports up to two terabytes of main memory. So if you thought Task Manager with 192 cores looked busy, let's have a look at the Superdome console. Now we're seeing the 256 processors, but I think we need a bigger screen. Maybe we'll ask for this one next time, right?

QUENTIN CLARK: You'll have to make every IT administrative console about 20 feet wide.

So on this machine, the Itanium machine, we also have the build of SQL Server running, and we have an application driving some load against SQL on this box as well. So let's go ahead and get that started, and you'll see it ramps up pretty fast. Actually, where you're seeing red there, that's mostly updating the UI; it goes away pretty quickly as it finishes the graphics work. And now you see it ramps up here in just a few seconds across all those cores.

BILL LAING: So this has been lots of joint work, both with hardware partners and between the Windows and SQL teams, really to optimize this. We did a lot of work, as we'll explain in the session later today, to remove coarse-grained locks in the system to get this great scalability. So what are you actually seeing, Quentin?

QUENTIN CLARK: We're seeing about a 1.7x improvement in our current development environment for this test workload. And we're about a year away; this is part of the Kilimanjaro release, which we announced a few weeks ago at the BI conference in Seattle. The Kilimanjaro release should hit in about the first half of 2010, so very close to the R2 release of Windows. So we're going to support up to 256 logical processors, in parity with what Windows is doing. And you can see that the work is actually going pretty well.

So that's 1.7 today; we have another year of development work in front of us, and we'll see where we land. Of course, mileage varies a lot depending on what kind of application you're really running and what you're doing with the database; with databases, I/O, memory, and CPU are always very tricky to balance. But one of the things worth noting here is how even the load is across the logical processors, and how consistent the load is on each processor. We really are in a very, very good state. We're really happy about what's going on here.

BILL LAING: This is really exciting work. It’s gone incredibly smoothly. We’ve had some great engineering work going on.

QUENTIN CLARK: Yes. I think somehow we're getting these machines off the stage, and we may be looking for volunteers later if anyone is interested in doing that. We're bringing them down to the pavilion around 1:00, I think, so people can see the systems, get their hands on them a little bit, and see what we've done here. And then there's a SQL Server session, I think, and a Windows session later today that will go into a little more detail on what we've done and how things are going.

BILL LAING: So thank you, Quentin. Let’s not leave it so long the next time. I don’t know what we’ll do after scale up, and scale out.

QUENTIN CLARK: No, we’ve sort of conquered both. Thank you very much.

BILL LAING: Thank you. (Applause.)

So really, this was an exciting demo for me; I'm kind of a big-systems guy, I like this kind of stuff. But it really shows what partnership within the industry can achieve: Microsoft, Intel, HP, and IBM really came together to address the needs of customers who are looking for very cost-effective solutions to their big-system problems. We think it also positions us well for the increasing core counts that we'll see on both the client and the server over time.

Maximizing Server Power Efficiency

Another area that I talked about earlier, and that I want to say a few more words about, is maximizing server power efficiency. This is a huge topic on everyone's mind, and we've done some really exciting work in Server 2008 R2. We have new default power policies, as we've come to understand better how to maximize the efficiency of the system while still delivering throughput to end users, and we've introduced a new feature called core parking. Core parking is really a set of changes to scheduling so that we use the minimum number of processors needed to get the work done, rather than trying to spread the work out over all the cores. At the same time, we put the cores we're not using into deep sleep states, which lets us reduce power consumption.
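Power policy on Windows Server 2008 R2 can be inspected and adjusted from the command line with powercfg.exe. A hedged sketch follows; the CPMINCORES core-parking alias is quoted from memory, so confirm the names with powercfg -aliases on your build before relying on them.

powercfg -list                                  # show the installed power schemes
powercfg -setactive scheme_balanced             # Balanced is the default server plan
powercfg -query scheme_current sub_processor    # processor settings, including core parking

# Example only: keep at least 50 percent of cores unparked while on AC power.
# powercfg -setacvalueindex scheme_current sub_processor CPMINCORES 50
# powercfg -setactive scheme_current             # re-apply the scheme so the change takes effect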

R2 also automatically discovers new power instrumentation hardware, both for metering and for power budgeting, and we've worked with our partners to do this. We also developed a DMTF (Distributed Management Task Force) compliant interface in R2 for remote management software, so people who build management software on top of Windows Server can take a whole-data-center view, controlling and metering power across systems. Again, this really required close collaboration between the OS teams and the BIOS and hardware vendors.

Another area where we got a lot of feedback from customers, particularly people running large data centers, including within Microsoft, is from people who are virtualizing large numbers of machines. They were really faced with the problem of two image formats: VHDs, or virtual hard disks, and our WIM format. So in R2 we're taking the first steps in allowing customers to start to standardize on a single image format, and we're enabling servers to boot VHDs directly on hardware as well as in a virtual environment. It's possible to create one image and then decide at deployment time whether you want that image to run on a virtual machine or on a physical machine.
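The native VHD boot described here is wired up through the boot configuration data store. A sketch of the documented bcdedit steps on Windows 7 / Windows Server 2008 R2 follows, run from an elevated prompt; the VHD path and description are hypothetical, and {guid} stands for whatever entry GUID the copy command prints.

# Copy the current boot entry and point the copy at a VHD file.
bcdedit /copy "{current}" /d "Windows Server 2008 R2 (boot from VHD)"

bcdedit /set "{guid}" device   "vhd=[C:]\Images\Server2008R2.vhd"
bcdedit /set "{guid}" osdevice "vhd=[C:]\Images\Server2008R2.vhd"
bcdedit /set "{guid}" detecthal on    # let the loader redetect the HAL for the new image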

The other thing that's happening is that customers are really starting to move away from being tied to physical resources. They want to make investments that decouple the physical from the virtual, and they want to get to a world where software is able to dynamically manage both the loads and the provisioning of new resources. With R2 we provide a framework for much more adaptive resource management: R2 supports live migration of virtual machines from one physical machine to another, allowing that movement without any service interruption.
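Live migration of a clustered virtual machine can also be driven from a script. This is a hedged sketch using the FailoverClusters PowerShell module that ships with Windows Server 2008 R2; the virtual machine group name and node name are hypothetical, and the behavior should be verified against your build.

Import-Module FailoverClusters

# List the clustered (highly available) virtual machines and where they run.
Get-ClusterGroup | Format-Table Name, OwnerNode, State

# Live migrate one VM to another node; the VM keeps running throughout.
Move-ClusterVirtualMachineRole -Name 'Vista SP1 VM' -Node 'Node2'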

Another area where we've made important improvements in virtualization performance in particular is support for what we call second-level address translation, again an area where we've worked closely with AMD and Intel. On systems built around AMD's Opteron series this is called Rapid Virtualization Indexing, or RVI, and capable Intel Xeon systems use Extended Page Table support. We support both of these technologies.

As customers consolidate, many of them have used terminal server sessions as a way of consolidating, but they're also looking at consolidating desktops onto servers in virtual environments; this is often called VDI, or the centralized desktop scenario. We've heard from customers that they would like a fairly integrated system across terminal server sessions and VDI sessions, and we also want to enable our partners to build on this infrastructure, partners such as Citrix, who build on top of our terminal server environment.

Demo: Windows Server 2008 R2 Virtualization Features

So as part of this work, we're now going to call the terminal server role Remote Desktop Services, because it's also going to support a connection broker that brokers both terminal server sessions and hosted virtual desktop sessions. So I'd like to bring on stage Bryon Surace, who is going to show us more of the virtualization features in R2 that we're excited about.

Bryon, great to see you.

BRYON SURACE: Thank you, Bill. Good morning.

BILL LAING: Good morning.

BRYON SURACE: Thank you. (Applause.)

BILL LAING: Good. All right. So we’re going to go over here.

BRYON SURACE: Absolutely. So good morning, it’s a pleasure to be here to demonstrate both Hyper-V, as well as System Center Virtual Machine Manager. I’ll be demonstrating how these technologies combined really create a solid virtualization platform for the data center. And then in addition, towards the end of the demonstration, I’ll be giving you a sneak peek at one of the most highly anticipated features of Hyper-V coming in Windows Server 2008 R2. But, before we get to R2, let’s take a look at Hyper-V in action.

BILL LAING: So what have we got up here? We've got the Hyper-V Manager that's part of Server Manager?

BRYON SURACE: We do. So this is the built-in management interface that's really designed to manage a single virtualization host computer. Here you can see we have a number of virtual machines currently running. We can bring up some of these virtual machines. Our first one is running Windows Vista Service Pack 1. Next we have Windows Server 2003, and our third virtual machine is running Windows Server 2008. If we bring up system properties in this virtual machine, you can see this is a 64-bit virtual machine with 6 gigabytes of RAM, and if we switch over to Task Manager, we can see this is a four-core virtual machine. So now, with 64-bit support, large memory support of up to 64 gigabytes of RAM per virtual machine, and support for up to four cores, you can see Hyper-V is really designed to handle the vast majority of enterprise-class workloads.

Now, if we look at our fourth and final virtual machine, you can see we're actually running …

BILL LAING: It looks like Linux to me.

BRYON SURACE: It is Linux, in fact SUSE Linux Enterprise Server 10. Our customers were very loud and clear that they want to standardize on a single virtualization platform; they don't want one virtualization platform that runs Windows well and a second virtualization platform that runs Linux well. They want one platform that will run all of their virtual machines, and that's exactly what we've provided in Hyper-V. Through our Linux integration components we ensure Linux not only runs, but runs well, and runs fast, on Hyper-V.

So now let’s switch over and take a look at our large-scale virtualization management solutions.

BILL LAING: So what we have built into Windows Server is very focused on managing a single-machine virtualization environment, and this is for when I've got tens or hundreds of physical machines and lots of virtual machines.

BRYON SURACE: That’s exactly right. System Center Virtual Machine Manager is designed to manage your entire data center, all your virtualization hosts, and all the virtual machines running on those hosts. So this is the management interface, and over on the left-hand side you see we’re actually managing three different virtualization platforms. Right here we’re actually managing Virtual Server 2005. The one on the bottom is Windows Server 2008 Hyper-V. And the one on the top is actually the new Microsoft Hyper-V Server, which is our bare metal stand-alone virtualization platform that’s free, and available for download on the Web.

So here in the middle you can see all our running virtual machines, and the hosts across our environment that they're running on. Let's take a look at just how easy it is to create a new virtual machine and put it into the mix. So we're going to come over here and say new virtual machine. This brings up our wizard. We'll go ahead and say create new virtual machine. We'll give it a name; we'll call it Brian VM. We'll click next. I forgot an R in my name, I've got to spell my own name right. All right. Clicking next brings up the configuration options. Here we choose the settings for our virtual machine, such as the BIOS settings, additional processors, and additional memory. We'll go ahead and select the default options and click next.

Now this brings up the intelligent placement wizard. This wizard is designed to look across your entire data center and help you find the best virtualization host to handle this virtual machine. It looks at things like CPU utilization and memory utilization to make that determination. You can see it's given us a few options; I'll go ahead and select the most recommended option, the one with four stars. We'll select it and click next. Now I'll walk through the next few screens of the wizard, and here we're presented with our final summary screen. If I click create, it will automatically deploy the VM.

BILL LAING: So I’m kind of old school. I don’t like this GUI stuff. I’m a command line scripting kind of guy. What do you do for me?

BRYON SURACE: Absolutely. Well, the great thing about Virtual Machine Manager is that it's written 100 percent on top of Windows PowerShell. So everything that you can do through the management interface, or through the wizard, maps to PowerShell cmdlets that can be scripted. And right here at the end of the wizard we're presented with the View Script button. If I press this button we get a good look at the PowerShell cmdlet that it's about to execute. If I wanted to, I could copy that cmdlet and put it into a larger script for large-scale deployment or large-scale management.
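The script the wizard generates is considerably longer than this, but a simplified, purely illustrative sketch of its shape using the VMM 2008 cmdlets might look as follows; the snap-in, server, host, and path names here are assumptions, and the real New-VM call carries additional parameters (hardware profile, job group, and so on) that the View Script button shows exactly.

Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

Get-VMMServer -ComputerName 'vmm01.contoso.com'            # connect to the VMM server
$vmHost = Get-VMHost -ComputerName 'hyperv02.contoso.com'  # the four-star placement pick

New-VM -Name 'Bryon VM' -VMHost $vmHost -Path 'D:\VMs'     # create and deploy the VM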

So now, let’s switch gears a little bit. Now, as I mentioned, we’re going to talk about some of the new Hyper-V features coming in the R2 release.

So as Bill mentioned, Live Migration is the ability to move a running virtual machine from one physical host to another with no dropped network connections, and no perceived user downtime. So let’s take a look at it.

We’re going to go ahead and switch over, and what you can see here on the left side of the screen, we have Cluster Administrator open.

BILL LAING: This is a familiar tool people have today, Cluster Administrator.

BRYON SURACE: It is. It's built into Windows, and people are very familiar with it. Now, we're doing this from Cluster Administrator; System Center Virtual Machine Manager, in the next version, is also going to be able to instigate a live migration, but here we're going to do it in Cluster Administrator. You can see we have two nodes, node number one and node number two, and these are running Windows Server 2008 R2 with Hyper-V. At the top you can see we have a number of virtual machines that we've made highly available, which means these virtual machines can now live migrate between the nodes. The virtual machine we're going to pay particular attention to is right here, this Windows Vista Service Pack 1 machine, and you can see it's currently running on node number one.

Now down here in the bottom right we have an open file share. This file share is actually from the Vista Virtual Machine that we’re about to live migrate. You can see there’s a video on it, so we’re actually going to start streaming this video. This is the Hyper-V ad where the IT pro drinks a can of Hyper-V to try to become more dynamic and more efficient. So this video is actually streaming directly from the virtual machine that we’re about to live migrate. At the same time, we have a file down here, it’s an ISO image, it’s almost three gigabytes in size. We’re going to take it from local, and we’re going to drop it onto the file share as well, and there’s the progress indicator for the file copy.

BILL LAING: So the VM that we’re looking at is playing a live video we can see, and we’re also copying three gigabytes.

BRYON SURACE: Exactly right. We're streaming the video down and we're pushing the three-gigabyte file up, and now let's go ahead and kick off the live migration. I'll right-click, say live migrate, and here in the status indicator you can see it happening. We're actually doing a live memory copy of the virtual machine while it's running. There are no dropped network connections, there's no perceived user downtime, you'll see there's no glitch in the video, and there's no stoppage in the file copy. And just like that it's over, and we've live migrated this to node number two, just like that. (Applause.)

BILL LAING: You want to try your luck and move it back?

BRYON SURACE: Absolutely, let's give it a go. I could do this all day. Let's go ahead and push it back to node number one, so we'll say live migrate, and here it goes. You can see the status indication; it's doing a live memory copy of the virtual machine, the video continues to run, the file copy continues to run, we're passing 50 percent, and here in just a couple more seconds, 88, 90 percent, we're back on node number one, just like that.

So, in summary, Hyper-V combined with System Center Virtual Machine Manager really creates a solid virtualization platform for the data center. In addition, when you combine it with the new features coming in R2, it really enables the dynamic data center. And so, Bill, in order to make you even more dynamic than you are today, we have for you your very own can of Hyper-V. So thank you very much.

BILL LAING: Thank you. (Applause.)

I think I'll stick to water. Thanks, Bryon, that was great.

So this morning I talked about the momentum we're seeing with Windows Server 2008 in the market. We're very excited about the product, and we keep getting great feedback on it. I also talked about the release of the Windows Server 2008 solutions, HPC, Small Business Server, and Essential Business Server, which we're launching next week. I also reaffirmed our cadence of releases: the next release of Windows Server is an R2 release, just as we did with Server 2003 R2. And I talked a little bit about the hardware trends and other influences that drove our design of R2. So I wanted to share our thinking and our planning as we go forward.

So what do I ask of you next, what would I like you to do next? In addition to the calls to action that Jon DeVaan and Steven Sinofsky had for you yesterday, which are really mostly the same for Windows Server, we'd like you to evaluate R2 and run some of the tools against it. Our goal is as much application and driver compatibility as possible with Server 2008. I also want to flag 64-bit: I think we're getting great coverage now with drivers, but for servers, 64-bit drivers are the future for us, so continue working on those drivers and make sure you have coverage for all your devices. And if you're a system vendor or a solution provider, help your customers plan that transition to 64-bit.

I also encourage you to attend the great sessions on enterprise computing for the rest of today to learn more about R2, and go see the great demos down in the pavilion. We're actually not going to manage to move the HP machine, but we have another one down there also. You also have copies of the pre-beta, so take it home, evaluate it, and test it with your different scenarios.

So thank you very much for coming. You’ve been a great audience, and I look forward to coming to WinHEC next time. Thank you. (Applause.)