Brad Anderson: Microsoft Management Summit 2008

Comments by Brad Anderson, General Manager, Management and Services Division, Microsoft
Las Vegas
April 30, 2008

ANNOUNCER: Ladies and gentlemen, please welcome Brad Anderson, general manager, Microsoft Corporation. (Music, applause.)

BRAD ANDERSON: Hey! Welcome to day two. I’ve got to admit, as we were watching many of you come in, there were a few people dragging, but, you know, phenomenal attendance at the event this year. To give you a couple of data points: we have over 4,000 people at the event this year, up from 3,000 last year. Four weeks ago we even had to shut down the Web site where people could enroll. So we have had a phenomenal turnout. On behalf of all of Microsoft, I want to really express our gratitude to all of you for coming and spending a week here away from your families and work. You know, we put a lot of time and effort into the technology that we build, and we’re just excited to show you what we’ve been working on.

Yesterday, Bob focused on dynamic IT and how it applies in the data center. Today, we’re going to talk about dynamic IT in the context of the client. We’re going to talk to you about what we call the dynamic desktop. And let’s start by talking about some of the significant trends that exist out in the world right now, and how they’re driving change in the way that IT is being asked to deliver services to users. First, let’s talk about mobility. You know, we’ve talked about mobility for many years, but now I like to refer to this as extreme mobility. Bandwidth is truly becoming ubiquitous. Laptops are outselling desktops for the first time in history. There are 3.3 billion cell phones in use around the world. That means over half of the world’s population is now carrying a cell phone.

Let’s look at the demographic changes in the work force. In the United States, there are 80 million baby boomers set to exit the work force in the next ten years. The projections are that by 2012, one third of the work force will be made up of individuals from generation Y.

Let’s talk about Gen Y for a minute. How many of you have teenagers? Okay, I’ve got three teenage girls, which is a problem in and of itself. But, you know, we were watching American Idol last week and I looked over and there sat my three teenage girls watching American Idol all with their laptops, communicating on their social networking sites, and they all had their cell phones in the other hand. I don’t know — are they talking on the cell phone? No, they’re texting. Like to four, five, six people simultaneously. You know, thank heavens for unlimited text programs.

But this is a generation that has grown up with technology. It’s a lifestyle for them. They’re bringing that lifestyle to work, and in doing that, they’re changing the way in which the workplace works. Certainly, regulatory compliance requirements are going to continue to grow, right? They’re going to get more complex, and there are going to be more requirements placed on the business and on IT to ensure that compliance regulations are met.

So the challenge here is: how does IT deliver on these growing demands of the user while keeping costs at a predictable level, understanding that diverse and growing set of user scenarios, and making sure that we’re compliant across all of those? So let’s talk about the user scenarios for a minute.

If you think historically about where the genesis of desktop management started, it was all about delivering a set of applications down to a well-connected desktop, and we literally assumed that there was a one-to-one mapping of that desktop to a user. You know, they were effectively chained together. The technology progressed. As laptops became more popular, we enabled connected and disconnected scenarios, and the technology could help when you were on a slow connection, those types of things. But we largely still assumed that one-to-one connection between the laptop and the user.

Well, things are dramatically expanding now. The mobile user really is no longer going to be defined as a user who carries a laptop with them. In fact, you can almost start to define the mobile user by what they decide to leave at home. Users want to have access to the applications and data they require to be productive on a multitude of devices, on all their devices. One of the most complicated things we’ve historically had to deal with is how you deliver a rich set of applications and the data that users need, that contractors need, individuals who aren’t even employees of your company, on non-company, unmanaged, and assumed-to-be-insecure assets. These are the things that we’re looking at, so IT is faced with these decisions. How do I use this technology? Centralized? Decentralized? Connected? Disconnected? Managed? Unmanaged? There’s a set of choices now that sits in front of IT to help resolve these end-user demands and deliver on them, but can it be done in a consistent, predictable, and cost-efficient way?

We submit the answer to that is: Yes. And we call this the Dynamic Desktop. The Dynamic Desktop is a solution that truly understands the intent of the user and understands the context of the user. The Dynamic Desktop is intelligent and adaptive; it’s able to assess the real-time working environment of the user and then adapt to deliver the applications and the data in the most appropriate and secure manner.

What this really comes down to is that we need to change the model so that the device is no longer at the center of how we think about delivering the client. The client is no longer the device; the client is the user. Now, this may sound subtle, it may sound very subtle, but this changes how we then build the policies. IT needs to be able to build a policy that looks like the following: we’re going to deliver this client-server application to all users in the sales department.

If the user’s on a corporate asset, we’re going to use application virtualization to deliver that application, and we’re going to have it run locally in a client-server mode. But when the user goes to launch the application, let’s do some real-time discovery of the working environment and understand what kind of bandwidth they have. If they don’t have fast bandwidth, let’s automatically launch the application in a thin client server mode so the user still gets great performance, and it’s a seamless experience for the user. If there are contractors working in the sales organization, we’re obviously concerned about data leakage and security, so we’re going to give them the data and the applications they need, but we’re going to do it through a hosted desktop in our data center so we have tighter control over it.
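To make the decision logic of a policy like that concrete, here is a minimal Python sketch; the field names, the bandwidth threshold, and the delivery-mode labels are illustrative assumptions, not an actual System Center or Configuration Manager interface.

```python
from dataclasses import dataclass

@dataclass
class LaunchContext:
    """Real-time working environment discovered when the user launches the app."""
    user_group: str           # e.g. "sales"
    is_corporate_asset: bool  # managed, domain-joined device?
    is_contractor: bool       # non-employee identity?
    bandwidth_kbps: int       # measured link speed at launch

def choose_delivery_mode(ctx: LaunchContext) -> str:
    """Pick how to deliver the sales client-server application for this launch.

    Hypothetical policy mirroring the example in the talk: contractors get a
    hosted desktop, corporate assets on a fast link run the virtualized app
    locally, and slow links fall back to a thin client (presentation) session.
    """
    if ctx.user_group != "sales":
        return "not-entitled"
    if ctx.is_contractor or not ctx.is_corporate_asset:
        return "hosted-desktop"          # run in the data center, tighter control
    if ctx.bandwidth_kbps < 1000:        # assumed threshold for "fast" bandwidth
        return "thin-client-session"     # presentation virtualization
    return "local-virtualized-app"       # application virtualization, runs locally

# Example: a sales employee on a corporate laptop with a slow hotel link
print(choose_delivery_mode(LaunchContext("sales", True, False, 256)))
# -> thin-client-session
```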

This is the paradigm we need to move to, and this is where we’re making our investments. When we talk about putting the user at the center of the model, it really comes down to what we call user-centric computing: enabling that user to be productive across the devices they want to use, how they want to work, when they want to work, and where they want to work.

So as we think about this in the context of dynamic IT, the framework that we’re going to use for the next hour, as we go through the rest of this presentation, is the four core areas of investment in dynamic IT — user-focused; unified and virtualized; process-led and model-driven; and service-enabled.

We’re going to walk you through the investments that we’re making, technologies that are either available today or will be available shortly, as well as some future-looking technologies, and talk to you about how this truly enables user-centric computing.

Let’s start with the user. So we talked about these different user scenarios: connected, disconnected, on a corporate device, on a non-corporate device, mobile, stationary. I represented these as if they’re five distinct things, but in reality, many of us transition across these user scenarios every day.

Let’s just use me as an example. I wake up in the morning — you know, for me it’s important to have breakfast with my children. So the first thing I do when I come out is start doing work on my laptop while the kids are getting ready, and then we have breakfast together. But I’m on my laptop, so I’m a mobile user on my laptop. As I’m driving to work, what am I doing? Okay — I’ve got my cell phone, right? I know many of us don’t admit it, but I’m doing e-mail on my cell phone as I drive into work. Come on, you do it as well.

Okay, I’m staying productive, now I’m using a different device. I get into the office, you know, I dock my laptop, but when I’m actually in my office, I’m using a desktop. I’ve got my dual monitors, you know, my full-size keyboard, so I’m more like a traditional office worker. Throughout the day, I’m in meetings, I have my laptop with me, then I go to drive home and what am I doing again? I’m doing e-mail on my cell phone. My team says they can always tell when I get in the car because they start getting e-mails that have my smartphone signature on it.

After we get the kids to bed, what do I do? Well, my wife and I go into our offices. She’s doing her e-mail, and I’m catching up on the day’s work, but I’m on my own personal PC in my office, so now I’m not even using a corporate device. I want to be productive across all those scenarios, and I want IT to deliver down to me all my applications and the data that I need to be productive across all of those.

So let’s just dive into a couple of these scenarios in a little more detail, and then we’re going to give you some demonstrations of how we can actually do that today. For the traditional mobile user with a laptop, what this is really all about is how you get the applications and the data delivered down to that laptop in a secure and predictable manner. Now, many of you are thinking, you know, Brad, we do this today. We have lots and lots of laptop users; what’s different about what you’re talking about here?

There are two significant differences. First of all, the options that you now have available to you with which to deploy the application have substantially expanded. Application virtualization, I would submit, is one of the key things that enables this, and really that’s a very young and a very fresh market, and yet we’ve got the industry-leading capabilities. And we’re going to show you that as we go on today.

We’ve also made significant changes and significant enhancements in the offline files and folders capability inside of Windows Vista. Not only can we now make sure that your files and folders are secure, protected, and backed up, but we’ll also do things like track your settings, your wallpaper, those types of things. And you’ll see that as I transition across different devices, that information is all stored for me and brought down for me as I go across the different devices that I work on. We’ve also made significant enhancements in how we encrypt the data on that laptop using the BitLocker capabilities inside of Windows Vista.

So now, with full confidence, I can quickly deploy technologies that enable application delivery and data protection. If for some reason the laptop or the desktop gets lost or stolen, and there were 750,000 laptops stolen or lost in the U.S. last year, the data is encrypted and protected, and because I am using virtualization, I truly have this layer of isolation between the applications and the data that allows me to rapidly redeploy them on a new device as it becomes available. That’s actually one of the number-one concerns we hear from customers: you know, things happen, laptops get damaged, stolen, lost, or reach natural replacement. How long does it take me to get my users productive again when that happens?

And we do that using the MDOP capabilities, System Center, and Windows. Let’s expand that a little bit and let’s expand that out to a hosted environment where I’ve got a contractor not on a corporate asset, I want to protect myself from a multitude of things. The key thing is it’s the same technology you use on your laptops and your desktops, we’re just delivering that application, that data, to a centralized host. But it’s the same infrastructure, it’s the same technology, it’s the same paradigm that you use across all these different working environments.

Obviously, we’ve got a strong partnership here with Citrix that helps us deliver that rich client experience down to the end user in this hosted environment. Okay, so enough talk. Let’s actually show you some of this in action, and show you how the technology, which is either available today or will be available in the next couple of months, delivers this consistent experience across a multitude of devices to really enable the dynamic desktop.

We’re going to invite Edwin to come out and show us that, Edwin. (Applause.)

PARTICIPANT: Hi, Brad.

BRAD ANDERSON: Hey, welcome.

PARTICIPANT: Thanks. When we’ve talked to our users, they tell us that they run into issues such as the inconsistent user interface, the problems with the lost or stolen laptops, or even just the problem — the inability to access all their files and their folders when they go on the road.

So when we talk about the Dynamic Desktop and the user focus, we’re talking about providing that consistent user interface. So, for example, we’ve got a laptop here. It’s set up for you. It’s got your login, we’ll go ahead and log in. And as we log in, what we’ll see is we’ll see this laptop’s been customized very much for you. It’s got your wallpaper, it’s got your files, and it’s got applications.

Here we actually have Office PowerPoint 2007 that’s been delivered through System Center Configuration Manager 2007 R2, and Microsoft Application Virtualization. We also have your files and your folders. So we’re using System Center products to deliver the applications, we’re also centrally managing your wallpaper, your desktop files, your document files through offline files and folders, and folder redirection that’s native within Windows.

So let’s go ahead and open up this file right now. And what we’ll see is we’re using Microsoft Application Virtualization to deploy that file, launch that program, and actually load the file up.

BRAD ANDERSON: Well, now, one comment I want to make here. You know, application virtualization provides some significant value because it isolates the application and prevents application-to-application compatibility issues. But did you notice how quickly that application was installed on that laptop? One of the great benefits of application virtualization is that effectively the installation becomes an xcopy of a set of files onto your laptop or your desktop.

In the case here, you know, you can literally deploy an application like PowerPoint in 10 to 15 seconds, much, much better than using the traditional MSI method.

PARTICIPANT: Yeah, this really is dynamic application delivery, and we’re using those virtualization layers to separate the states and give the administrator the flexibility to deliver the dynamic desktop across the systems using a unified System Center architecture.
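As an aside, the "installation is effectively a file copy" point can be modeled very simply (an illustrative sketch only; real App-V packages also carry metadata, shortcuts, and a streaming format that this ignores):

```python
import shutil
from pathlib import Path

def deploy_virtual_app(package_dir: str, cache_dir: str) -> Path:
    """Model of copy-based deployment: no installer runs on the client;
    the package is simply placed into a local cache and registered."""
    src = Path(package_dir)
    dest = Path(cache_dir) / src.name
    shutil.copytree(src, dest, dirs_exist_ok=True)  # the "xcopy" step
    # A real agent would also publish shortcuts and file-type associations here.
    return dest

# deploy_virtual_app("/packages/PowerPoint2007", "/appv-cache")
```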

So what we’ll do here is we’ll actually edit the document. This is Bob’s presentation from yesterday. And we’ll give you the promotion.

BRAD ANDERSON: I’d actually be satisfied with vice president — you don’t have to put the “senior” word in front of that.

PARTICIPANT: Well, we’ll go ahead and keep that.

BRAD ANDERSON: Okay.

PARTICIPANT: And then we go ahead and save the files. Now, Brad, when you go on the road and somebody asks you to go ahead and look at a presentation or make a quick change and you don’t have your laptop, what do you do?

BRAD ANDERSON: You know, this has always been a personal struggle. You know, like many of us, I’m a workaholic, and I’ve always taken my laptop with me on vacation, which, you know, kind of presents its own set of conversations with my wife. But I’ve always wanted to have that safety net there because, inevitably, something happens and I need to go off and do something. It might be something quick or long. But I’ve always carried my laptop to be able to do that. You know, the last time we went on vacation, at spring break, I actually left my laptop at home. And using the technologies that we have available here, when the call did come in, and it came, I just logged in using the PC at the hotel or at my brother-in-law’s house, and I was able to get all my work done. So show us how we can do that.

PARTICIPANT: Yeah, I mean, that’s a situation we all want to be in. We don’t want to have to carry the laptop around, but we still want to get work done. So often we can find a computer, we can even find high-speed Internet, but it’s not exactly what we need. So maybe you’re on vacation, you’re at your in-laws, and you need a file changed. So what we can do is we can actually use Windows Server 2008 Terminal Services Gateway to access our applications.

So here we are. We’re on a standard Vista PC, probably a home PC, we’ll go ahead and launch and go into a terminal services gateway Web site. We’ll log in with your credentials. And once we’ve logged in, we’re given a series of options, including the option to do a remote desktop. And what we’re gonna do here is we’re actually going to roll back into a terminal server that is sitting on our corporate site. We’ll set some of the settings for our broadband connection, and we’ll actually go ahead and log in.

BRAD ANDERSON: So what’s actually happening here is this just establishing a remote session to my laptop?

PARTICIPANT: No. Actually, what we’re doing is we’re establishing a session back to a terminal server back at corporate. And in the background, a terminal services session is spun up, but then using the same infrastructure as we did on the laptop, using System Center and Windows and redirection and application virtualization, we’ll actually get the same desktop as we had on the laptop, but we’ll get it on the terminal server.

BRAD ANDERSON: So it provides me that consistent working experience and it brings all of my data and all of my applications — so you made a change in that presentation —

PARTICIPANT: Right.

BRAD ANDERSON: Can we go take a look to see if that change is on this terminal server?

PARTICIPANT: Well, let’s go ahead and open that up. And, again, using that same infrastructure, we can deliver that user-focused, dynamic desktop, now from our laptop onto our terminal server. So let’s see if that promotion took. And there we are.

BRAD ANDERSON: That’s good news.

PARTICIPANT: Promotion took. (Applause.)

BRAD ANDERSON: So, again, leveraging the capabilities of application virtualization, the offline files and folders capability, terminal server and our partnership with Citrix, you have that ability to get that consistent experience.

Now, let’s extend this out. Let’s talk about a contractor scenario where, you know, it’s not an employee, we want to make sure we protect against information leakage and other things, how can we enable this in a hosted desktop type of environment?

PARTICIPANT: Well, a hosted desktop environment, which many call VDI or virtual desktop infrastructure, is a way to run a desktop on a hosted virtual machine, probably on a Hyper-V server, in a central location. So what we can do, as administrators, is give access to users. A good example would be that contractor or offshore worker. Maybe they specifically need administrator-level access to a desktop, and we can deliver that through a virtual machine.

Now, what we’ll actually do here is open up a Web page and we’ll connect to a XenDesktop server. Now, XenDesktop, which is from our partner Citrix and integrates with System Center, will actually route our request for a virtual machine and send us to that desktop. So once I’ve logged in, we get an icon that says, “VDI Desktop.” We simply click on that desktop. And as we click on that desktop, we’re actually routed to a Hyper-V server that’s running a Vista desktop. And because it’s, again, managed by the same infrastructure, the same System Center infrastructure and Windows infrastructure, we’ll get our files, we’ll get our folders, and we’ll get that wallpaper back again.

BRAD ANDERSON: Yeah, one thing to point out there, you also get the single sign-on capability. So I was asked for my user name and password once, and XenDesktop automatically logs me in. There’s my wallpaper, there’s my application, there’s my data following me across the devices I want to work on.

PARTICIPANT: Yeah, manageability is really the key in all of this. Having the single infrastructure to be able to deliver the user-focused, managed desktop to the mobile worker, whether they’re in the office or out of the office, connected or disconnected, or using a managed device or an unmanaged device.

So we’ll just simply log off, save our settings, and we’ll be back to where we were before.

BRAD ANDERSON: All right, fantastic, Edwin.

PARTICIPANT: Thank you, Brad.

BRAD ANDERSON: Let’s give him a hand. (Applause.)

So this concept of having a single infrastructure, a single set of solutions that you use to manage all the ways that your users need and want to access data and applications, is key to keeping your costs down and key to having a single solution to manage that compliance and that complexity, while the end user sees one consistent experience.
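To picture the brokering flow from the hosted-desktop demo, here is a minimal sketch of a connection broker that routes an authenticated user to a hosted virtual desktop (illustrative only; the names and pool structure are assumptions, not the Citrix XenDesktop or System Center APIs):

```python
# Minimal sketch of a VDI connection broker, assuming a simple in-memory pool
# of hosted desktops; real brokers also handle session reuse, load balancing,
# and power management of the virtual machines.

VM_POOL = {
    "hyperv-01": {"vm": "vista-desktop-17", "assigned_to": None},
    "hyperv-02": {"vm": "vista-desktop-23", "assigned_to": None},
}

def broker_connection(username: str, token: str) -> dict:
    """Authenticate once, then route the user to a hosted desktop.

    The single sign-on token is passed through so the user is not prompted
    for credentials a second time on the virtual desktop."""
    if not token:                      # stand-in for real credential validation
        raise PermissionError("authentication required")
    for host, slot in VM_POOL.items():
        if slot["assigned_to"] in (None, username):
            slot["assigned_to"] = username
            return {"host": host, "vm": slot["vm"], "sso_token": token}
    raise RuntimeError("no hosted desktops available")

print(broker_connection("contractor-01", "kerberos-ticket-abc"))
```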

I’ll tell you, one comment here. One common question I get when people find out I have five children is how I keep work and life balanced. And just for the record, I married out of my league.

So real quickly, let’s talk about two significant sets of announcements that we’re making here at the show. First of all, we’re announcing that Configuration Manager 2007 Service Pack 1 will be released in May. In that service pack, there are two key sets of new functionality I think all of you are going to be excited about.

The first set is that we’ve done a lot of work with our partner, Intel, on making the vPro technology a first-class citizen inside of Configuration Manager. It enables you to do a set of scenarios that, in the past, would have required a visit to the desktop; you can now do that centrally. We’re going to give you a demonstration of that in a couple of minutes, but it’s a really fundamental way to drive down your costs and enable scenarios that in the past were not possible.

The other thing that we’ve done is improve the asset intelligence capabilities inside of that service pack. Two years ago at this event, we announced the acquisition of a company called AssetMetrix, which gave us some intelligent asset management capabilities. What we’ve done now in Service Pack 1 is establish the connection between the Configuration Manager database and the service that’s running in the cloud, so that as new data becomes available and new applications are fingerprinted, that data automatically flows into your Configuration Manager database and is kept updated and fresh with the latest things that have come out. So there’s significant value inside of that service pack.
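A rough sketch of that catalog connection might look like the following (illustrative only; the payload shape and the `fetch_updates` callable stand in for the real online service, whose actual contract is not shown here):

```python
from datetime import datetime, timezone

def sync_asset_catalog(local_catalog: dict, fetch_updates) -> dict:
    """Pull newly fingerprinted applications from the online service and
    merge them into the local Configuration Manager catalog.

    `fetch_updates(since)` stands in for the call to the cloud service; it
    returns records like {"app_id", "title", "category"} added after `since`."""
    since = local_catalog.get("last_sync")
    for record in fetch_updates(since):
        local_catalog.setdefault("apps", {})[record["app_id"]] = record
    local_catalog["last_sync"] = datetime.now(timezone.utc)
    return local_catalog

def fake_service(since):
    # Canned stand-in for the online asset intelligence service.
    return [{"app_id": "contoso.crm", "title": "Contoso CRM", "category": "LOB"}]

print(sync_asset_catalog({}, fake_service))
```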

We’re also announcing that the release candidate for the R2 release will be available in July. And that release is where you’ll be enabled to do the things we just showed. You’ll have application virtualization deeply integrated into Configuration Manager. When you go to publish an application, you’re asked whether you want to do a standard application or a virtualized application, and it’s just seamlessly integrated, so you have a consistent way of delivering your applications from an administrative standpoint, whether you do that virtualized or using an MSI.

We’ve also moved all of our reporting inside of Configuration Manager to SQL Reporting Services. Okay? That’s one of the most consistent requests we’ve had from all of you. We’ve also done some integration with the Forefront Client Security product, so that within Configuration Manager you can run reports that show you your compliance on patches as well as the signature levels of your host protection capabilities through Forefront. So there’s a significant set of capabilities in that R2 release. And one thing I do want to point out: the team has done a phenomenal job to turn around a release less than a year after Configuration Manager 2007 came out. So hats off to the agility that the team has enabled.

The other thing that we’re announcing is that there will be a new version of the Desktop Optimization Pack available in Q3 of this year. That is where you’ll see the new version of Microsoft Application Virtualization, the technology acquired from Softricity, be released. It has significant improvements, and this is truly the version of application virtualization that we think is going to go incredibly big.

One thing I’ll point out here: the Desktop Optimization Pack is the fastest-selling V1 product in the history of Microsoft volume licensing. Just let that sink in for a minute. The adoption and the rate at which customers are embracing this set of technologies and offerings have been phenomenal. To become the fastest-selling V1 product in the history of our volume licensing says quite a bit about the value in there.

The other two things that’ll be updated inside of that optimization pack are the Diagnostics and Recovery Toolset, so there’ll be a new version of that released, as well as a new version of the desktop error monitoring capability. There’s significant value there. And what you’ll see from us with the Desktop Optimization Pack is that we plan to release updates to it every six months. This is a subscription purchase, and every six months you’ll see new value coming out from us in that offering.

Now, I thought it would be great for us to actually see a customer who’s using these technologies together today. The State of Indiana is doing remarkable things in how they’re using Configuration Manager and Application Virtualization together. Let’s take a look at what they’re doing.

(Video segment.)

BRAD ANDERSON: Great savings, great value, great capabilities to enable the things we’ve been talking about.

So let’s now transition to the second area of investment inside of Dynamic IT, and we call it Unified and Virtualized. Let’s talk about how the Windows desktop has traditionally worked. As we’ve thought about the Windows desktop historically, we’ve thought about the hardware, the operating system, and the applications and data as a set of things that’s been glued together, that really was not isolated, was not separated, and therefore you were really chained to a specific device.

Well, we’ve been making specific investments at Microsoft, specifically in the area of virtualization. And what virtualization enables us to do is create that separation between the layers of the desktop: the separation between the operating system and the hardware, and the separation between the applications and the operating system. That separation is what enables the flexibility that you require: the flexibility to do things like deploy new applications, and deploy them in a way that you’re confident they’re not going to damage another application or the operating system. That’s the value of application virtualization.

Upgrade your operating system from one version to another, and even though you may have some compatibility issues, virtualization helps you to still make that upgrade and give the user a seamless experience. Upgrade your hardware, replace hardware. By creating this separation, virtualization enables us to do things like rapidly switch out the hardware or the operating system or deploy new applications.

So let’s talk just for a minute about the breadth of offerings and the breadth of products that we have in virtualization. And I’ll make the comment right here that I believe Microsoft has the broadest and richest set of virtualization capabilities on the desktop, bar none. This is the most complete set. And part of our job is to build the technologies in a way that allows you to leverage all of these virtualization capabilities and use them where they’re appropriate but, again, using one consistent way to do it.

Now, we’ve made significant investments in these areas. If you look at, for example, presentation virtualization: we just completed the acquisition of a company called Calista. Calista gives us some improved capabilities for a graphics-rich environment on Windows Terminal Server. In application virtualization, we’re about to release a new version of the Application Virtualization capabilities that we acquired from Softricity. In desktop virtualization, we have Virtual PC, but you’ve seen our intention, we’ve announced our intention to purchase a little company called — why did I go blank here? Kidaro, thank you. I guess the stress just got to be too much. We’ve announced our intention to acquire a company called Kidaro. And what Kidaro does is build on the Virtual PC capabilities to make a seamless experience for the end user.

Now, the acquisition is not completed. We expect that to complete in May. What I would encourage all of you to do is, one, attend one of the MDOP sessions that are going to be presented. Also, we have individuals from Kidaro who will be in the technology lab who can give you a demonstration of the power that Kidaro brings and the simplicity that it enables for end users on that desktop.

Now, we talk about these as if they’re separate, but in reality, the dynamic desktop is going to use a combination of these. We can see a combination where people are using Virtual PC and application virtualization together, creating that separation between the different layers of the desktop. And it’s System Center that is at the heart of managing all of that and delivering that consistent experience.

Now, let’s just talk for a minute about what I believe a comprehensive management solution is. We’ve talked about the changes that virtualization is bringing. We’ve talked about the changes that end users are bringing to the workplace. When I think about a comprehensive management solution, you have to have a solution that manages the entire stack from the physical to the virtual world. You need a single set of capabilities and a single set of tools that manage both the physical and the virtual in a consistent manner. The virtualized world isn’t a separate world from the physical world. Many vendors will tell you you should deploy a different set of technologies, different set of tools.

I’ve never spoken with a customer that’s told me they want to deploy more infrastructure. So as you look at your management technologies, look for a solution that allows you to have that consistent experience from the physical to the virtual and then also look for tools that give you the ability to manage the hardware through the operating system, through the applications to the data. The comprehensive solution enables both.

I just want to dwell for a minute on some of the work that we’ve done at the hardware layer. We’ve worked with a number of partners to enable things from a hardware management perspective that were never possible before. We’ve done this with IBM and Dell and HP. They now actually deliver the updates for their server hardware, for their BIOS and their firmware, in a way that is automatically consumed through System Center and easily distributed. Dell has extended that out to also do that on the desktop.

We’ve done a lot of work with the innovation that Intel has delivered in vPro. vPro enables a number of scenarios where, before, you had to actually dispatch somebody to the desktop to address these issues. And we all know that when you actually have to make a desk-side visit, your costs go up by a factor of 10.

So I’ll actually show you some of this in action. We’re going to invite Dave Randall up to give us a demonstration of the integration that we’ve done with Intel, with Configuration Manager and vPro. David. (Applause.)

DAVID RANDALL: Thanks, Brad. Let’s go take a look.

Now, you talked about how System Center tools manage from the hardware to the operating system, the applications, and all the way up to the data. And what I’d like to focus on today is the hardware aspect. With Configuration Manager 2007 SP1 and Intel Active Management Technology integration, we have the ability to get a direct connection down to the hardware and manage from that level. This is something that’s never been possible before with Configuration Manager.

In fact, we’ve enabled 15 new management scenarios through this technology. This will help our customers, like you said, remove a lot of their on-site desk visits. Now Configuration Manager is unique in that we actually manage the entire setup and configuration process of the hardware from within Configuration Manager itself. One of those 15 scenarios that I’d like to talk about specifically is power control.

Now, quite simply, power control is the ability to turn on, turn off, or restart a machine remotely. Now, out in the ComNet area, we’ve got a bunch of vPro systems, and we’ve got a remote view of those systems, and I’d like to demonstrate the power control on those.
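Conceptually, the batch power-control operation that follows looks something like the sketch below (illustrative only; `send_amt_command` is a placeholder, not the real Configuration Manager out-of-band API, and the real channel is authenticated and encrypted rather than a print statement):

```python
def send_amt_command(machine: str, command: str) -> bool:
    """Placeholder for the out-of-band call to a machine's management firmware.
    Here we just pretend it succeeded and log what would be sent."""
    print(f"[out-of-band] {machine}: {command}")
    return True

def power_control(machines: list[str], action: str = "power_off") -> dict:
    """Apply a power action (power_off / power_on / restart) to a set of
    machines, even if no operating system is running on them."""
    return {m: send_amt_command(m, action) for m in machines}

# Shut down every machine at the demo table in one operation
power_control(["demo-pc-01", "demo-pc-02", "demo-pc-03", "demo-pc-04"])
```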

Since nobody’s using those systems, we’re safe to do any power control operations — (Laughter, jeers.)

BRAD ANDERSON: Okay, so with a power control, can you actually shut down Rodney’s PC from here?

DAVID RANDALL: Yeah.

BRAD ANDERSON: All right, let’s do it.

DAVID RANDALL: Let’s do it. Let’s do it. All right. So I’m going to choose that machine. I’m going to go to out-of-band management, power control. I’m going to choose “power off” and okay and confirm and we should see his system shut down.

BRAD ANDERSON: That’s too good.

DAVID RANDALL: All right. That’s good.

BRAD ANDERSON: Let’s see what Rodney does now.

DAVID RANDALL: Okay. So he thinks he’s going to outsmart us and go to the next one.

BRAD ANDERSON: All right. Let’s actually show them the power of this, right.

DAVID RANDALL: All right. Let’s take it to the next level.

BRAD ANDERSON: Can you shut down the whole table?

DAVID RANDALL: You guys ready to go to the next level?

BRAD ANDERSON: Yeah, let’s shut down the whole table.

DAVID RANDALL: The whole table?

BRAD ANDERSON: Yeah.

DAVID RANDALL: Whole table. All right, here we go. The whole table. I’m going to select all these machines, out-of-band management, power control, power off, okay, confirm, and in about one, two, three — we should see them go dark. (Laughter, applause.) Outstanding.

BRAD ANDERSON: Wow, that was good. All right. All right, so, you know, working with the computer when it’s on is one thing. One of the things that vPro allows us to do is actually manage that computer when it’s powered off. Let’s show what we can do there.

DAVID RANDALL: Absolutely. So I’m going to bring up the out-of-band management console. I’ll do this for one of the systems that’s out in my network. And while that’s coming up, let’s talk about some of the features of the out-of-band management console. This is going to get a direct connection to the hardware using Kerberos and Active Directory integration. That means you don’t need to maintain a separate list of passwords in the hardware for all of your technicians to get access to it.

Additionally, we’re using TLS. So that means the connection down to the device is absolutely secure and encrypted. Now that we’ve got the system shown here, let’s set up a little scenario. Let’s say that a user has called their help desk and they said every time they boot their computer there’s this goofy little spinning thing, and it tries to connect to the network, and I don’t like it, and can you make it go away.

So using the power of our out-of-band management console, even when the system is powered off, you can see it’s currently in a powered off state. We can go connect to that device, make the changes, save them, and exit. So let’s go do that.

BRAD ANDERSON: And historically, this would have required a visit to the desktop?

DAVID RANDALL: Absolutely.

BRAD ANDERSON: This is not something that you could have done centrally before. So one of the great innovations that Intel has delivered through this vPro technology is the interface that gives management tools like Configuration Manager the ability to do things that in the past weren’t possible from a central standpoint.

DAVID RANDALL: So I’ve changed over to the power control option. I’m going to choose BIOS. This will force the machine to boot into the BIOS on power up. I’ll choose power on and go to my serial connection window.

From within the serial connection, we’ll watch and control that machine. And on the other screen, you’ll actually see the physical machine. Now, remember, I’m only using the keyboard and mouse on the out-of-band management console, but you can see simultaneously the two machines coming up, live view on one side, our console on the other.

So our task here is to change the BIOS configuration and reorder the network boot. I’m going to go ahead and choose English. We’ll navigate over to the storage option and go down to boot order. Sure enough, network controller is up at the top of the list. I can select network controller, move it to the bottom of the list, save these settings, save them once again, and now when the end user comes back to their desktop, it’ll be booting normally like they would expect.

BRAD ANDERSON: So that’s pretty cool. We’re actually getting a remote control session of this PC even before it’s booted into the operating system. So the ability now to make that change from a central location I think is incredibly valuable. (Applause.)

But wait, there’s more. One of the things I also commonly hear from customers is, you know, what do you do in the situation where for some reason, I don’t know if it’s hardware, don’t know if it’s software, the operating system just won’t boot, but there’s data on that hard drive that I need to recover.

DAVID RANDALL: So we can take care of that same system with the out-of-band management console. On the same power control page, I can choose IDE redirection. Now, this would allow us to boot off a network file. And from that network file, we could actually do remote diagnostics and troubleshooting. We could do repairs. We could copy that user’s data off to a network location. We could even kick off an OS deployment task sequence from within Windows PE to rebuild that image if it needed to be repaired, and then later copy that data back for the user.

BRAD ANDERSON: And I could do all of that without ever having to physically visit the desktop?

DAVID RANDALL: You won’t even have to go close to it.

BRAD ANDERSON: That’s great.

DAVID RANDALL: So using the power of Configuration Manager and Intel Active Management Technology, you now have a set of tools that allow you to reach much deeper into your help desk and troubleshooting scenarios and save an awful lot of money in your IT budget.
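The remote-recovery flow described above can be outlined as a sequence of steps (purely illustrative; the step names are descriptive placeholders, not real task-sequence or out-of-band operations):

```python
# Illustrative outline of an out-of-band recovery for an unbootable machine;
# each step stands in for a real remote operation, none are actual API calls.

RECOVERY_STEPS = [
    "power_on_with_ide_redirect",    # boot the dead machine from a network image
    "run_remote_diagnostics",        # inspect the disk and OS from Windows PE
    "copy_user_data_to_share",       # rescue the data the user needs
    "start_os_deployment_sequence",  # rebuild the image if repair isn't possible
    "restore_user_data",             # copy the rescued data back
]

def recover(machine: str) -> None:
    """Walk the remote-recovery steps for one machine, no desk-side visit."""
    for step in RECOVERY_STEPS:
        print(f"{machine}: {step}")

recover("sales-laptop-042")
```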

BRAD ANDERSON: All right. Thanks, David.

DAVID RANDALL: Thank you.

BRAD ANDERSON: Let’s give him a hand. (Applause.)

You know, we’ve done a lot of work with Intel, and I think one of the really unique benefits that we’ve brought with Configuration Manager is a provisioning and security integration piece with Active Directory, so that Active Directory is what manages access to all the vPro technology across your enterprise. So I’m really proud of the work that we’ve done there.

Let’s now talk about the third area of technology investment as a part of Dynamic IT. We talked about user-focused, we talked about unified and virtualized. Now I want to talk about model-driven and process-led. Bob mentioned yesterday that we started on this vision five years ago with what we called DSI, and we’ve expanded it to include a broader set of capabilities, and now we call it Dynamic IT.

But models have always been at the center of what we’ve done. We’ve worked across the industry to do things like make progress on an industry standard like SML, and we’re actively involved in the CML working group as well. Models are core to everything that we do. As we think about the technology and how it progresses forward, we want to deliver more and more knowledge to you in the form of models that help you understand things like what the health of the network is, whether it’s properly configured, and what the controls are that I need to be aware of and be able to verify for regulatory compliance.

What we’re doing with our management tools is capturing this knowledge and delivering it in the form of models. We’re doing this today. If you think about what Operations Manager is, the management pack is a health and capacity model. So you’re already using model-driven management today. With Configuration Manager we deliver what we call configuration packs. Configuration packs contain configuration policies and regulatory compliance policies.

We’ve just released over 65 different configuration packs, which range from what a secure desktop should look like to the controls you need to be aware of for regulatory compliance like SOX and HIPAA. All that technology, all that knowledge, is being delivered to you in the form of a model.
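In spirit, a configuration pack is a declarative model plus an evaluator. A toy version (illustrative only; the setting names and the dictionary format are assumptions, not the actual desired-configuration schema) could look like this:

```python
# A toy "configuration pack": desired settings expressed as data, plus a
# generic evaluator that reports drift. Real configuration packs use a much
# richer schema, but the model-driven idea is the same.

SECURE_DESKTOP_PACK = {
    "firewall_enabled": True,
    "bitlocker_enabled": True,
    "antimalware_running": True,
}

def evaluate(pack: dict, observed: dict) -> list[str]:
    """Return the settings where the machine drifts from the model."""
    return [name for name, desired in pack.items() if observed.get(name) != desired]

machine_state = {"firewall_enabled": False, "bitlocker_enabled": True, "antimalware_running": True}
print(evaluate(SECURE_DESKTOP_PACK, machine_state))
# -> ['firewall_enabled']
```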

As we think about Service Manager coming out (we announced last year at MMS that we’re going to be building Service Manager), it will include solution packs. And a solution pack is a model that captures things like your business processes. And wrapping all of this, we know that product is only one portion of what you have to do in your organization to be efficient and to be successful. You have to understand the policies and the people and the product, and how all of that comes together.

You know, at Microsoft we’re big believers in ITIL. We’re big believers in ITIL, and what we do is take the ITIL methodology and those best practices and capture them in what we call the Microsoft Operations Framework, which is a set of guidance about people and technology and processes, based on the ITIL methodology, about how you would deploy and use Microsoft technologies in your environment.

And we’re announcing today that Microsoft Operations Framework version 4 has been released out to the Web. I’d highly encourage you to go take a look at that. We have a set of guidance and best practices there that truly will help you understand how our technologies are best utilized and integrated with your people and your policies using the ITIL methodology.

Now, let’s talk a little bit about some of the additional work that we’ve done around models. One of the things that we’ve done within Windows Server 2008, in combination with Configuration Manager and other technologies, is enable a model-driven solution where we can actually verify that your laptops, your desktops, and your servers are properly configured, have the appropriate patches in place, and have the appropriate host protection pieces in place before we give them access to the network. One of the key things inside of Windows Server 2008 is what we call the Network Access Protection capabilities, which protect your network against bad things coming in from your desktops and laptops. And we’re going to show you how we do that through model-driven management, so let’s invite Bill Anderson out on stage to do that with us. Bill.

BILL ANDERSON: Mr. Anderson.

BRAD ANDERSON: Mr. Anderson.

BILL ANDERSON: I thought about doing my fly-in Neo thing, but no strong cables, so —

BRAD ANDERSON: Seems you’ve been living two lives.

BILL ANDERSON: Exactly. So as Brad said, being able to be model-driven and process-led is important for us. But it’s also important to keep that user-centric approach and make sure we deliver both. It sounds a little mutually exclusive, but the interesting thing is that disciplines like security and asset management don’t go away. We just need to think of a different way to approach them. We need to think about using policies as a bit of a guardrail to protect us in areas where we might be at risk.

So I’m going to take those technologies that Brad walked us through, which are Windows Server 2008 and Configuration Manager, and show you some simple ways that we can put those policies in place to allow our users to be successful but still protect our systems.

Now, I don’t know about you, but a lot of policies come my way from somebody else. So let me let you listen to this voicemail that I got from my security dude the other day.

(Voicemail plays.)

BILL ANDERSON: Yeah, I’m sure you guys have one of those you have to answer to every once in a while as well. But he’s got a point, we do want to protect ourselves against things like outside intrusions. So Firewall, Forefront, and patches are an important thing for us to do.

So let me walk you through the process to do that. I’m going to start off with the NAP console, or the Network Access Protection console, that you see here. We’ve got three health validators that we’ve installed for this particular scenario: a generic Windows one, a Forefront one, as well as Configuration Manager. Now I’m going to start with his first two, which are the Windows Firewall and the Forefront client. I actually think we’ve done this already, but let’s go double-check and see how easy it is.

So I’m going to open up the Windows Security Health Validator. I’m going to click on “configure.” And sure enough, we’ve actually gone in and turned the firewall on as a basis for policy for network access. So we’re good there.

BRAD ANDERSON: Now, what you’re looking at here is a set of policies — effectively a model that we’re building that says we are not going to allow any devices to access the network if these settings, these configurations, these updates are not in place. And the first one is, if the firewall is not enabled, quarantine them, enable the firewall, then bring them onto the network.

BILL ANDERSON: Yeah, and that’s the piece we’ll get to next, which is how to fix it when they’re broken, but that’s scenario two.

So I’m going to go in the configuration on the Forefront side and, again, our Forefront team has done all of the work for us. They’ve actually gone in and enabled these policies for things like installed auto start, service must be running. So a system that doesn’t have those characteristics is not going to be able to access our network. So those are two of our three.

Now, the last thing I’m going to do is flip over to my Configuration Manager console. Don’t mind the little flip here while I change VMs. Most of you are probably using Configuration Manager for your security updates. And so what we want to be able to do is very quickly and easily, through Configuration Manager, be able to set that policy on a patch. Now, there’s two ways we can do it: If it’s an update you’ve already deployed out, I can go to that update and just turn quarantine on. But with the exploit time actually decreasing in the market today, in many cases, you may want to do that as one single operation.

I’m going to demo one that we’ve actually already had in production for a while, because I heard about this “whammer blammer” thing a couple of weeks ago, before the security guy did. I’m going to go into my all updates and, sure enough, I’ve got my whammer blammer patch, appropriately named. I go to my NAP evaluation and go ahead and turn it on. Now, I could turn this on immediately, or I could go ahead and set a future date if I want to let it continue to deploy out in my enterprise. I don’t know about you, but he sounded pretty mad, so I’m going to go ahead and turn it on right away.

And that’s it. All of a sudden, his request was literally a few clicks. And I’ve now protected my environment for those particular conditions. Now, what I want to do is move on to what the end user experience really looks like for this. Now, Brad, you were kind enough with Edwin earlier to kind of talk about being on holiday and all of your roaming behaviors. What we want to do is basically continue that scenario and indicate that you’re now back in the office.

So what we’ve done is we’ve taken your system, turned it on, and logged it on for you, and your desktop, with your lovely family sitting right here in front of you, is ready to go for the day. (Laughter.)

BILL ANDERSON: We’ve joked about for years about being part of the same family, so I just decided to take a little liberty and add myself to the family picnic. (Laughter.) So Brad, what I need you to do is —

BRAD ANDERSON: You’re not getting in the will, though.

BILL ANDERSON: That’s okay. We’ll talk about my allowance later, dad. I need you to double-click on that icon that says “NIC prep,” and I’ll explain to the folks what that’s really going to do. It’s over on the left-hand side. I know you can’t see it with my glaring shirt there. And go ahead and accept.

So, a normal day — what would your normal day be like when you first come into the office?

BRAD ANDERSON: Well, one of the things I’m known for is I’m a Diet Coke aficionado, so when I come in, I plug my laptop in, then I go to the machine and get a couple of Diet Cokes.

BILL ANDERSON: Exactly. Well, what would normally happen is Brad would come into the office, he’d plug his laptop in, he’d go get a couple of Diet Cokes, he’d probably get randomized by a few people in the hallway, and he’d come back in and, guess what, his system would actually have been remediated and fixed before he came back. But we basically said that’s a little weird; we don’t want to demo nothing, per se. So what we did was we hooked his system up, we connected it to the network, but we disabled the NIC. So now we’ve turned the network adapter on and, Brad, we have a tiny problem here. It looks to me like you don’t meet the criteria to get on our network.

So unbeknownst to Brad, while he was off on holiday, a few things happened. First of all, we approved that other patch. That’s not on his system yet, so it’s actually being assessed for applicability right now. But remember those other two things? Firewall and Forefront. Well, it just so happens one of us kids was mucking around with your laptop and downloading some software while you were on holiday. I can only guess which one of us it was.

But what we did was somebody installed some software and it wouldn’t work, so we disabled the firewall and when the software came down, there was actually some malware embedded in it that’s actually shut off your Forefront agent. And so the three conditions that our security officer was so concerned about are actually now being enforced on Brad’s laptop.

Now, Brad, we opened your HR Web page, because I know when you’re back in the office, you’re going to do your time reporting.

BRAD ANDERSON: Report my vacation time, yeah.

BILL ANDERSON: Yep. So go ahead and open that up. Now, refresh that really quick. And what you’re going to find at this point in time is that because we’re using IPsec for our security enforcement, all of our critical systems on the network are actually protected from systems that might be seen as insecure. So as this resets and refreshes, you’re going to find that it’s going to give you a “service not found” error. So it’s protecting the rest of our critical data.

Now, this is going to take another minute or two, so I’m actually going to flip to another scenario. We talked a little bit about taking a security update and being able to do NAP enforcement with that. But many of you have your own custom software inside of your enterprise. Wouldn’t it be great if you could take an update for any of your line-of-business software and actually be able to put these policies in place with that? Well, that’s what we’ve done.

One of the things I’ve been dying for for years is to finally get a flux capacitor at work. And we got one. But it actually comes with a burden of some software we had to write internally. Unfortunately, our developers made a few security mistakes. And so we’ve got some critical updates for our flux capacitor software that I need to get out there, and I need to make sure that no one is allowed on the network without those updates.

BRAD ANDERSON: You know, I’m sitting here thinking to myself — whammer, blammer, flux capacitor —

BILL ANDERSON: This is real-world technology stuff.

BRAD ANDERSON: You’ve got too much time on your hands.

BILL ANDERSON: This is —

BRAD ANDERSON: All right.

BILL ANDERSON: Anyway — so I talked earlier about being able to approve an existing update that’s in production, or being able to do one on the fly. For this one, we’re actually going to approve it for NAP as part of the deployment process. I’m going to just go drag and drop it over to the simple deployment template. We’ll take the default name so I don’t “fat finger” it.

We go ahead, and at this point in time these are all my scheduling parameters. Just like I did earlier, I can actually turn on quarantine, enable it as soon as possible, and finish out. So now, all of a sudden, for a custom update for a custom piece of software that we’ve written, we can still go protect the rest of our systems and our network. And the demo and time gods worked great for us. Brad, you’re remediated. Why don’t you refresh that Web site again and make sure you can get network access.

And so what we’re able to do is actually take these rich policies and, like I said, use them as a guardrail to allow users to still be productive but maintain the safety and security of your systems.

BRAD ANDERSON: That’s great. Thanks, Bill.

BILL ANDERSON: Thanks, Brad. (Applause.)

BRAD ANDERSON: I think one of the things that is most impressive is this quarantine idea. As we started down this path three or four years ago, working with the Windows Server team, one of the things that we were incredibly nervous about was that we’re basically injecting a set of services and a set of checks before the user even gets access to the network. And if we go about this in the wrong way, we could actually cause a flood of calls to come into the help desk.

And so we went to a lot of work to make sure that it was very simple to configure. As you noticed in what Bill was showing, as you approve a patch, right in that motion, right in that workflow, you also have the ability to NAP-enable it. But we’ve done some other things as well. You can actually go in and say, listen, I don’t want to start enforcing NAP or quarantine based on this software update or this policy, but I want you to come back and tell me, if I were to do that, how many workstations or servers would be impacted.

So you can actually go through the “what if” scenarios before you start to enforce these types of things. But at the end of the day, this is a great solution built entirely on software. It doesn’t force you to go and do any updates to any of your hardware to protect your systems, your network, and your servers from things coming in from the wild. So it’s a great solution, and I really encourage you to take a look at it as a part of Windows Server 2008 as well as Configuration Manager.
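To make that policy model concrete, here is a minimal sketch of health validation with quarantine and a report-only “what if” mode (illustrative only; the check names, the fictional patch ID, and the `enforce` flag are assumptions, not the actual NAP or Configuration Manager interfaces):

```python
# Toy model of network access protection: a set of health checks, a quarantine
# decision, and a report-only ("what if") mode for assessing impact before
# enforcement is turned on.

HEALTH_CHECKS = {
    "firewall_enabled": lambda m: m["firewall"],
    "forefront_running": lambda m: m["antimalware_service"] == "running",
    "critical_patch_installed": lambda m: "KB-whammer-blammer" in m["patches"],
}

def evaluate_health(machine: dict) -> list[str]:
    """Return the names of the checks this machine fails."""
    return [name for name, check in HEALTH_CHECKS.items() if not check(machine)]

def admit(machine: dict, enforce: bool = True) -> str:
    failures = evaluate_health(machine)
    if not failures:
        return "grant-access"
    if not enforce:                        # "what if" mode: report, don't block
        return f"would-quarantine ({', '.join(failures)})"
    return f"quarantine-and-remediate ({', '.join(failures)})"

laptop = {"firewall": False, "antimalware_service": "stopped", "patches": []}
print(admit(laptop, enforce=False))
```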

So let’s now talk for a minute about the final area of investment inside of Dynamic IT. We refer to it as services-enabled. And there’s multiple aspects to this. Bob talked yesterday about how we’re going to be coming out and giving guidance about how an application should be architected to be stateless and to be highly scalable in a services type of world. But when we think about services and we think about how that gets delivered down to administrators and to end users, we think about services in two different ways, and I’m talking about the actual service that end users are going to consume.

First of all, we refer to what we call finished services. And a finished service is a true service where there’s no infrastructure at the customer site; the entire application or service is running up in the cloud, for example in Microsoft’s data centers, and an end user can access it anywhere they have a browser. We’ve been talking about user-centric computing and the dynamic desktop, and by its very definition, a finished service enables that.

Now, if you think about, you know, finished services in the world, what comes to mind? Is there a company? Is there a solution? What comes to mind when you think about finished services? How many of you thought of Windows Update? By a show of hands, how many of you thought about Windows Update as a finished service? I would submit that Windows Update is actually the world’s largest service.

Anybody want to guess how many PCs we update every month with Windows Update around the world? Last month we updated over 600 million PCs around the world. That's 60 percent of the world's PCs updated through Windows Update. We understand how to deliver software as a service. We're delivering one of the largest, if not the largest, software-as-a-service solutions in the world.

So now let’s talk about the other way that we think about services. And we call these attach services. And an attach service leverages the exact same infrastructure that the finished service leverages, but it gives the administrator a little more control because it leverages infrastructure that’s on-site at the customer premise. And as we think about how we build software, we actually build the software in a way that we can use the same bits in our finished service and in the attach service. We want this symmetrical approach so that you, the customer, can make the choice on how you want to use that and our technology enables you to use that as a full finish service, as an attach service, or fully on premise.

Now, if Microsoft Update or Windows Update is the finished service, what do you think the name of the attached service is that uses the same bits? It's WSUS, Windows Server Update Services. Interestingly enough, WSUS is the exact server software that we run in our data centers to run Windows Update. WSUS is now a component of Configuration Manager, and when you add up the desktops that are updated around the world through WSUS, Configuration Manager, and Windows Update, the technology coming out of my team updates over 800 million PCs around the world every month. That's 80 percent of the world's PCs.
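As a rough illustration of that symmetrical, same-bits idea (this is not WSUS or Windows Update code; every class and identifier below is invented), the same core update-catalog logic can sit behind either a cloud-hosted finished service or an on-premises attached service that adds administrator approval in front of it:

    # Hypothetical sketch: one shared core, two deployment models.

    class UpdateCatalog:
        """Shared core logic used by both deployment models."""
        def __init__(self, updates):
            self._updates = updates  # e.g. {"KB-A": "Security fix", ...}

        def applicable_updates(self, installed):
            return {k: v for k, v in self._updates.items() if k not in installed}

    class FinishedService:
        """Runs in the vendor's data center; clients reach it over the internet."""
        def __init__(self, catalog):
            self.catalog = catalog

        def applicable_updates(self, installed):
            return self.catalog.applicable_updates(installed)

    class AttachedService:
        """Runs on-premises; the admin adds approval control over the same core."""
        def __init__(self, catalog, approved):
            self.catalog = catalog
            self.approved = approved

        def applicable_updates(self, installed):
            all_updates = self.catalog.applicable_updates(installed)
            return {k: v for k, v in all_updates.items() if k in self.approved}

    catalog = UpdateCatalog({"KB-A": "Security fix", "KB-B": "Feature rollup"})
    onprem = AttachedService(catalog, approved={"KB-A"})
    print(onprem.applicable_updates(installed=set()))  # only the approved update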

Okay, that’s an example how we think about finish services and an attach services. We talked about service pack two and the fact that we’re delivering an attach service for asset intelligence. We have the same symmetrical approach now for our inventory and asset management capabilities. As a part of MDOP, we have an asset intelligence service which is a pure service that runs from Microsoft’s data center and allows you to scan your systems, return that scan up to the Web, and we actually start to put metadata and a bunch of business reports around that.

With Service Pack 2, we're taking that exact service and connecting it with your configuration management database, so that as updates are made by the community up in the asset intelligence service, those assets flow down into your enterprise. You also have an administrative console in your enterprise that allows you to fingerprint your own applications and input your own data, and then you can choose whether you want to send that information up to the cloud for use by the rest of the community.
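A loose sketch of that two-way flow, assuming the service exposes a community catalog to pull down and an opt-in upload for locally created fingerprints (the function names and example entries are made up for illustration):

    # Hypothetical sketch: community fingerprints flow down into the local
    # catalog; locally fingerprinted apps flow up only if the admin opts in.

    def sync_asset_intelligence(local_catalog, cloud_catalog, local_fingerprints,
                                share_with_community=False):
        # Community updates flow down into the enterprise catalog.
        local_catalog.update(cloud_catalog)

        # Admin-created fingerprints always land in the local catalog...
        local_catalog.update(local_fingerprints)

        # ...and are uploaded only when the administrator chooses to share them.
        if share_with_community:
            cloud_catalog.update(local_fingerprints)
        return local_catalog, cloud_catalog

    local = {"contoso-erp.exe": "Contoso ERP 2.1"}
    cloud = {"winword.exe": "Microsoft Office Word 2007"}
    mine  = {"lob-tool.exe": "Internal LOB Tool 1.0"}

    local, cloud = sync_asset_intelligence(local, cloud, mine,
                                           share_with_community=True)
    print(sorted(local), sorted(cloud))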

So software as a service: you hear Microsoft saying software plus services, software and services. I wanted to make sure you saw two real examples today, shipping and being used broadly, of how we think about finished services and attached services, and how we build these in a symmetrical way that allows us to use the same bits in both ways.

Now, one of the teams inside of Microsoft came to me a year ago and they had this concept of how to take all the knowledge, all the data that we collect with Configuration Manager and Operations Manager and using an attached service, make use of that data in a way that really gave you a view into your business and a view into your technology like I had never seen before in my life. You know, every once in a while one of these things comes along where you kind of go, you know, wow, that is a game-changing technology, or that is a game-changing solution.

So what I wanted to do is give you a demo of this. Now, this is a technology preview. These are live bits that we're actually using, with customers already feeding data into it. We're not announcing a release date, but I want you to understand the power that can be had, and the power we can put into your hands, as you combine this concept of on-premises software with an online service. And to do that, we're going to have Neal come out and give us the demo. Welcome, Neal.

PARTICIPANT: (Applause.) Thanks, Brad.

So before I get started, I just want to give a little background on where I come from. Before I joined Brad's organization, I was in a group called the Exchange Center of Excellence. We went out and looked at all the problems that customers were having running Exchange, and the problems Microsoft IT had; we went to Customer Service and Support; we looked at all the issues we were having, all the incidents, all the critical situations. We analyzed those problems and we came up with some fascinating conclusions.

The number one was that over half the problems could be solved for customers running Operations Manager configurations —

BRAD ANDERSON: Not only solved, but prevented.

PARTICIPANT: Prevented. Yeah, exactly. So we looked at it and said, well, what could we do? What solutions could we build to take this to the next level? And just to show this off, let me ask: Brad, what are some of the common questions that you get asked?

BRAD ANDERSON: Yeah, you know, to answer Neal's question, as I go out and talk with customers about Operations Manager and Configuration Manager, and we talk about Exchange and SharePoint, one of the most common questions I get is, "What are other customers doing?" What are they doing that I'm not doing? Are there best practices that I'm not aware of? How do I compare to other organizations that might be in my same industry or have a similar environment to what I have? Distributed, centralized, the number of desktops, those types of things.

BILL ANDERSON: And that’s exactly it. As you look across service management and IT, we have this concept where you can’t manage what you can’t measure. And so when you look at what we’re trying to do, number one, show you how well you’re performing today. Here’s a service that we provide where we actually bring back data from your operations manager and your Configuration Manager environment, we bring it back, we strip out all the personal identify information, we bring it back to the cloud, and we layer business analytics on top of your data.

As you look here, this is the Configuration Manager 2007 scorecard. You can see this is real data. This is Microsoft IT. We run about — manage about 170,000 clients in Microsoft IT. And you can break it down by our geographic region.

BRAD ANDERSON: Yeah, so to give you a little more view of what you’re seeing here, what we’re doing is we’re taking the data out of Configuration Manager, as Neal said, and bringing that back into the cloud, putting it into a scorecard, and then allowing the administrators of the organization, the IT professionals to come in and take a look at that.

One thing I want to point out here. You look at that top line there, it talks about client availability. What we're saying here is that 85 percent of our clients are available to be managed right now. And one common question I get is, what are the benchmarks? Should it be 99? Should it be 80? Should it be 72? Well, it depends on your organization.

At Microsoft, you know, we've got developers who are reimaging their PCs every day, or machines that may be powered off. What we have found from our historical trends is that about 85 percent client availability is what we see on an average day, so we consider that to be green.
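The scorecard logic behind that "green at 85 percent" judgment is simple to sketch. The thresholds below are only the organization-specific baseline Brad describes for Microsoft IT, not product defaults, and the yellow threshold is an invented example:

    # Hypothetical sketch: turn a raw availability percentage into a scorecard
    # status against an organization-specific baseline.

    def availability(clients_reachable, clients_total):
        return 100.0 * clients_reachable / clients_total

    def status(value, green_at=85.0, yellow_at=75.0):
        if value >= green_at:
            return "green"
        if value >= yellow_at:
            return "yellow"
        return "red"

    pct = availability(clients_reachable=144_500, clients_total=170_000)
    print(f"Client availability: {pct:.0f}% -> {status(pct)}")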

PARTICIPANT: And this is what it’s about. It’s business analytics, what do you need to run your service as a business? Two weeks ago on our site migration, we brought up a new site. We knew we were going to be red. Red is okay. But we also set a target to be back to our service level agreement in two weeks. We set a target, we hit the target, we called the project success. We ran the IT as a business. This is what we have.

BRAD ANDERSON: So this is great. So this is showing Configuration Manager data, but what about the other applications I may have in my environment?

PARTICIPANT: That’s absolutely correct. So this isn’t just about Configuration Manager. As you can see, we have Exchange, we have Windows, we’re going to have SQL. So let’s look at the Windows Server.

Here we’re looking at basically Active Directory metrics. And as you can see, we do pretty well, you know? But as anyone knows here, once you show that you’re doing well, the CIO is never happy. What’s the next question? How do we run cheaper but keep the same service level? And, again, the business. How do we improve our service? Looking at these metrics, we can actually go and look at our CPU utilization.

Yeah, our target was once 30 percent, but we're running at nine percent. Maybe there's an opportunity for virtualization, maybe we should do some server consolidation.

BRAD ANDERSON: Yeah.

PARTICIPANT: But if we wanted to take that project on, how do we plan a strategy? Where do we investigate first? Again, we group our servers into hierarchies based on how we want to look at this. And when we look at Europe, we're at two percent. If we're going to start a virtualization project, where do we start? Do we start in the Americas? No, because we're still around 15 percent there. Why not start in Europe? Two percent? It's a great place to start.

BRAD ANDERSON: Fantastic. And you can drill down into this and see, server by server, what the utilization is on each one. So taking this data that comes out of Configuration Manager and Operations Manager and bringing it up into the scorecard gives you a very easy-to-use view into what your systems are doing and how they're functioning, and enables you to make those business decisions.
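Conceptually, that drill-down is a grouped aggregation over the collected performance data. A minimal sketch, with server names invented and numbers chosen only to mirror the figures quoted on stage:

    # Hypothetical sketch: average CPU utilization per region to pick the best
    # candidate region for a server-consolidation project.

    from collections import defaultdict

    samples = [  # (region, server, avg_cpu_percent)
        ("Americas", "AM-SRV-01", 16), ("Americas", "AM-SRV-02", 14),
        ("Europe",   "EU-SRV-01", 3),  ("Europe",   "EU-SRV-02", 1),
    ]

    by_region = defaultdict(list)
    for region, _server, cpu in samples:
        by_region[region].append(cpu)

    averages = {r: sum(v) / len(v) for r, v in by_region.items()}
    target = min(averages, key=averages.get)
    print(averages)                      # {'Americas': 15.0, 'Europe': 2.0}
    print("Start consolidation in:", target)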

PARTICIPANT: And it gets better. As we look at this, you always ask the question: how can we improve? When I was in IT, I really didn't know how to improve. I would search online, I would ask, I'd post questions on forums. But what I needed was data, and IT data is the true asset. Using the service, we can actually go to a community of data. We have that knowledge at our fingertips. If I wanted to improve my service, what are other people doing and what can I learn from them?

And, for example, here I’m looking at server availability. I want to know how I’m doing versus my peers. So I’ll look at just high-tech software, only companies my size. Yeah, I want to talk about customers that are having the same problems I have, and I’ll look at the top five people over the last month. So here we go in April, and here we have a list of the customers. And we can see that we’re running number two. Well, you know, we should be pretty proud of that. But let’s say in some instance I want to see what that company number one is doing. As you can see, company one, it’s stripped, there’s no information that could ever portray who that company is.

BRAD ANDERSON: Yeah, but imagine the power here now. You’re actually able to get a baseline comparison about how you are doing with respect to other organizations that may be similar to you, that may be different, but you can actually get a baseline. And there are some other amazing things we can do.

Okay, so now I can actually see that somebody is doing better than I am, but can I actually see what they’re using, what hardware they’re using, what their policies are?

PARTICIPANT: And that’s what the key is. Are they managing their infrastructure in a different way? What servers are they using? What configurations are they using? Why can’t we share this today? Real data. Just going back to company one, why are they the best performing company. Let’s look at their hardware, let’s look at their scorecard, let’s look at their configurations.

But this enables so many scenarios. If you wanted to upgrade to the next version of Exchange and you have a whole new deployment, just go see what the best people are doing. What hardware are they using? What software are they using? What configuration? That's what it's about: real data, which we've never had before, to help us make decisions.

BRAD ANDERSON: Now, actually, there’s even one additional thing we can do here. Wouldn’t it be great if now that we have this data back in the cloud, as our support organization, as our consulting organization identifies issues or problems, challenges, wouldn’t it be great if we could actually inject that into the system and have the system automatically identify customers that may run into similar problems and proactively notify them?

PARTICIPANT: And that was the dream. When you're in IT, you want to know about problems before they happen, not afterwards. You don't want to be figuring out how to get back up and running as fast as possible; tell me about it beforehand. So the dream was always that when someone calls in for help, if one customer has a problem, we can share that knowledge with every customer. And that's what we're doing here. As you can see, we actually have notifications. We're going to take every item that comes in from Customer Service and Support, feed it into this engine, and come down and notify you on the specific server that's affected.
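In spirit, that proactive-notification engine is a matching problem: known-issue signatures distilled from support cases on one side, each customer's inventory on the other. A hypothetical sketch (the issue IDs, patch IDs, server names, and advice text are all fictitious):

    # Hypothetical sketch: match known-issue signatures against a customer's
    # inventory and emit targeted notifications.

    known_issues = [
        {"id": "KI-1001",
         "matches": {"product": "Exchange 2007", "patch_missing": "KB-0001"},
         "advice": "Install KB-0001 before enabling continuous replication."},
    ]

    inventory = [
        {"server": "EX-01", "product": "Exchange 2007", "patches": {"KB-0002"}},
        {"server": "EX-02", "product": "Exchange 2007", "patches": {"KB-0001"}},
    ]

    def notifications(known_issues, inventory):
        for issue in known_issues:
            sig = issue["matches"]
            for server in inventory:
                if (server["product"] == sig["product"]
                        and sig["patch_missing"] not in server["patches"]):
                    yield server["server"], issue["id"], issue["advice"]

    for server, issue_id, advice in notifications(known_issues, inventory):
        print(f"{server}: {issue_id} - {advice}")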

This is specific knowledge for the specific customer. We'll be able to map the problems back to you. And this is the way we look at things going forward. This is real, by the way; we are actually onboarding customers today. If you're interested, there are two ways to get in contact: I'll be down at the booth, or e-mail [email protected]. We're onboarding customers and we're ready.

BRAD ANDERSON: Yeah, thanks, Neal. Let’s give him a hand. (Applause.)

Does that really begin to sink in, the kinds of things we're able to start doing now? When Darren and Neal brought this concept to me last year, it was one of those immediate reactions where you go, man, that is just so incredibly powerful. The ability to benchmark yourself, baseline yourself, and get information from the broad community about how they're managing things and how they're doing things really is going to drive your service levels up and drive your technology to a new level. So I think this is a fantastic thing. Again, if you're interested in this, e-mail [email protected] and Neal and the team will be in contact with you.

So let’s summarize: We’ve spent the morning talking about Dynamic IT and specifically how the areas of investment in Dynamic IT enable what we call the dynamic desktop. And at the center of the dynamic desktop is this intelligent and adaptive solution that’s able to understand the working environment, the context of the user. The user becomes the center of the model, and not the device.

We talked about the trends in the industry and in demographics: mobility, consumerization, people wanting to bring their lifestyle into the workplace and combine the two. Focusing on the user and making this a user-centric computing model, rather than a desktop-centric one, is really the key to what we're going to enable over the next several years.

Again, we talked about Dynamic IT being a ten-year vision; we're five years into it. As I think about the major areas of investment that we're making, this is the single biggest area of investment with respect to how we think about the client of the future. These are radical changes, and they're changes that are being driven by the demands your users are placing on you.

So let’s now talk about the System Center roadmap. This is the roadmap of where the future releases are coming out. So a couple things I’ll draw your attention to. First of all, in 2007, it was just a massive wave of innovation that came out from the System Center team. New products, major updates to all of our products. You’ll see that happen again in 2010. The scenarios that Bob talked about yesterday around the dynamic data center, the scenarios that were talked about today — you know, many of those things are available, but this is a journey. And some of the things that we’ve been talking about around really changing that paradigm to center on the user will be delivered in those 2010 set of releases.

But we’ll also deliver value in the interim. The R2 release of Configuration Manager and we’ll continue to deliver you value in-between these major releases. One of the things that we’ve spent a lot of time on is making sure that we got into a regular cadence of releases. And like Bob talked about yesterday, we’ll be on a cadence where every three to four years we’ll do major updates of our products with interim releases in-between them.

As I thought about a way to end, you know, this is the way that I thought that we would end, but really thinking about that user-centric computing, I thought to myself, what is something that we could do that would really hammer home this concept of enabling the user wherever they are? Wouldn’t it be nice if you could put — my hotel key — wouldn’t it be nice if you could put your applications and your data on a device that you carry with you?

I’m going to use a thumb drive here, but this could be your cell phone. Right? How many employees have cell phones? And wouldn’t that be nice if you could come up to a desktop, plug in that USB, and because it has my applications and my data on it, my applications and my data appear on that device delivered from that thumb drive or from that cell phone.

What I can then do is go into My Computer, come to that removable media, and open something like that hero slide that Bob Muglia ended with yesterday, which I really like. And look what's happening at the bottom here: it's actually installing PowerPoint onto this machine for me from the thumb drive using application virtualization. This is truly user-centric computing. My data and my applications follow me. It adjusts. It intelligently adapts to the way that I want to get my business done. Thank you. (Applause.)
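The closing demo boils down to: detect the removable device, read a manifest of the user's applications and documents, and stream or launch them on the host machine. A very loose sketch of that idea; nothing here is the actual Application Virtualization client, and the file and field names are invented:

    # Hypothetical sketch of "my apps and data follow me": find a removable
    # drive carrying a manifest, then hand each entry to a stand-in launcher.

    import json
    from pathlib import Path

    def find_manifest(drive_roots):
        for root in drive_roots:
            candidate = Path(root) / "workspace.json"
            if candidate.exists():
                return json.loads(candidate.read_text())
        return None

    def launch_workspace(manifest):
        for app in manifest.get("applications", []):
            # Stand-in for streaming the virtualized package onto the host.
            print(f"Launching virtualized app: {app['name']} from {app['package']}")
        for doc in manifest.get("documents", []):
            print(f"Opening document: {doc}")

    # Fall back to an inline example manifest if no removable drive is present.
    manifest = find_manifest(["E:/", "F:/"]) or {
        "applications": [{"name": "PowerPoint 2007", "package": "office.pkg"}],
        "documents": ["hero-slide.pptx"],
    }
    launch_workspace(manifest)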

We want to partner with you on your businesses. We've got some amazing things that we're working on, and I hope you enjoy your time here over the next couple of days. We believe our vision for how we deliver the dynamic data center and the dynamic desktop is so far out in the lead, and we want to be here with you next year and the year after. Next year, we'll actually be demonstrating betas of the technologies that we talked about today.

Thank you very much and enjoy the rest of the day and the rest of the conference. (Applause, music.)

END