Speech Transcript – Rick Devenuti, MEC 2002

Remarks by Rick Devenuti
Corporate Vice President & Chief Information Officer
MEC 2002
Anaheim, California
October 9, 2002

ANNOUNCER: Ladies and gentlemen, please welcome Microsoft’s Corporate Vice President and Chief Information Officer, Rick Devenuti.

(Applause.)

RICK DEVENUTI: Good morning. It’s a pleasure to be here at MEC today to tell you how we do IT at Microsoft. There are really three things I’m going to try to accomplish today: give you some background on IT at Microsoft, what we do and how we do it; walk through the three demos we’ve got planned to give you a sense of where we look to add value as an IT organization to Microsoft; and then, assuming we have time, take questions at the mikes we’ve got set up for Q&A.

Now, I’m told I’ve got until 10:15 to get through these three components, but I’m also told it’s a MEC tradition to get up and leave after about an hour, so I’m going to do my best to try to get this down and we’ll let you decide about whether we do the Q & A or not.

First of all, I want to talk about the infrastructure, how we think about it, and give you a sense of what we are. We think of ourselves as an enterprise IT organization. In terms of scale, we are worldwide. We support a little over 450 sites throughout the world. The sites you see on this map are actually the end points of our network, the regional data centers off which the other 400-plus sites hang. In terms of size and scale, we support about 70,000 people. That’s our 50,000-plus employee base plus contractors and vendors whom we think of as part of the Microsoft family.

Like most CIOs, I’ll tell you we have too many PCs. We average a little over two PCs per user. Because of the development nature of what we do, virtually every developer has two or three PCs on their desktop.

We manage just over 7,000 servers and a little over 8,000 pieces of network gear to make this enterprise work. All monitoring is done in Redmond in our global operations center. And that large network number has really doubled in the last two and a half years as we’ve rolled out wireless LANs in our facilities virtually around the world.

We run, on a core basis, about 400 apps to get our job done, and we continue to look at ways to reduce that number and bring more value to the business.

As you’d expect, we’re a big user of mail and the Exchange platform. Today we have a little over 100 Exchange messaging servers, and that’s a number I’ll come back to talk to you about because it’s one of the exciting things we’re looking forward to reducing with the Titanium rollout.

That’s a framework of what we do, and for a company our size we think we look like any other IT organization. Our mission, like most IT organizations’, is to be proactive, to make sure the applications and infrastructure meet the needs of our user base, which we define as our customers and clients, Microsoft employees, and the partners we work with, making it easy both to work with us and to get people’s jobs done.

So, in that very vanilla statement, I think when we talk to peer organizations our job and task look very much like theirs.

What makes us different is really two things. One, we’re a completely Microsoft shop. We run our business on Intel servers, standardized on Dell and HP machines, and we count on those partners to make sure we have hardware we can run our business on. We run only on the Windows platform, and we work with ISVs to make it possible to deliver the goods and services we need for our customers and clients. So in that sense we’re very homogeneous and different from most of the customers we do talk to.

The other thing that makes us different is our mission and our priority. Our number one priority is to be Microsoft’s first and best customer. And what I mean by that is to run the business on beta software and be the first customer to understand what’s right and wrong with our product and whether it’s ready to ship to our customers, and I’ll come back and talk to this in the next slide. Internally we use the term “eating our own dog food” to describe being this first and best customer.

We also want to provide thought leadership. You won’t be surprised that I get a lot of advice on how to do my job from various parts of the Microsoft family. We want to set a coordinated IT strategy and make sure the architecture and our application and infrastructure work together and continue to grow, both as we grow organically and as we add acquisitions like Great Plains and Navision.

And we want to run a world-class utility, and what we mean by that is a measured world-class utility. So it’s not dog food or utility; it’s both. We have a very disciplined scorecard methodology where we ask people to be accountable for the reliability, the performance and the cost of the service they run.

So, for example, Derek Ingalls, who’s in charge of the messaging environment, is responsible for hitting availability, reliability and performance goals on the messaging platform. Every month we sit down with a scorecard and go through how we’re doing on messaging by region, and within regions by country. We measure that as a total availability number with no credit given for planned down time, because we’re going to roll new bits on these servers consistently. No allowance is made for the fact that we’re running beta software. The task of the organization is to use beta software and run a worldwide enterprise.

It’s not surprising that when we run beta software we get great support from the product group. But I want to tell you, as an enterprise, if you’re not in the group whose product we’re testing, you’re like any other user: you expect 24/7 performance and don’t understand why the IT organization can’t produce it.

Let me go a little bit more into what being first and best means, because this is something that has evolved considerably since we first launched it as our priority roughly three years ago. At that time we always made it our mission to try to test our products before they shipped and make sure they worked. But we hadn’t agreed with the product group on shared goals and what we were trying to do, so we really were more a test organization than a customer.

And when things didn’t work, when we couldn’t get a product to work in its beta form, we’d constantly hear things like, “Well, that’s not how a real customer would implement,” or “Real customers don’t think that scenario is important.”

And so over the course of the last two and a half years, we’ve learned our lesson from Windows 2000, we’ve learned our lesson from the Exchange 2000 rollout, and if we’re going to roll out beta software today at Microsoft, we start with a set of shared goals. We sit down with the product group and we agree on what it is we’re trying to do. It’s scenario-based. It’s no longer just how many servers or desktops can you get it on; it’s why are we doing it, what’s the business case, what’s the scenario we’re trying to prove, and under that type of scenario what type of availability should we see, what type of costs should we see, what do we expect the experience our customers will have to be. And is the scenario we’re using one that most customers, or at least many customers, will see as valuable to their enterprise?

We then agree, based on those scenarios, on a set of goals based on timelines, so how many servers and what scenarios will we do on beta 1, on beta 2, on the release candidate, what we call RCs, and what do we need to look like and what level of availability do we need before the product can ship.

We’ve now evolved to a process where products won’t ship unless we and the product group can meet our shared goals.

And so when we do that document we actually have a locked agreement. It’s IT’s job to make it happen, to get it deployed and to make it work. Where we can’t make it work because there’s a technology-blocking issue, it’s the product group’s job to step up and give us a new build that will make it work.

And so we use that period between the betas and the release candidates to get great feedback to the product groups on the quality of the scenarios we’ve developed together and the experience we’re having. We believe it’s critical to make sure these products work in our environment before we ask you to try to implement them in yours.

Doing that while still maintaining a viable enterprise takes a great deal of planning and testing before we start to implement. We don’t just throw bits on the server, and I’ll give you some examples of that as we talk about Titanium.

And so the job is dog food and running a world-class utility. Perry Clarke, who runs the group that interfaces between IT and the product group to get those shared goals, puts it very succinctly: It’s not either/or; it’s both. Yes, it’s a hard job. If you don’t want a hard job, go someplace else.

We really have to make sure our products are tested thoroughly before they’re ready for you. On the other hand, our constituency, our user base, expects to be able to get in their office or at home or on the road, connect to the infrastructure and be able to do their job, and so we balance the two.

It’s really very simple: to get the environment to work today we need people, process and technology. I look forward to the day when the only reason we have down time in our environment is that we’re on beta software, with no people issues and no process issues. And so we’re focused internally on eliminating all of those, so that the only block in our environment is ever the fact that we’re running on dog food, beta software.

Let me give you an example about how we’re doing that in Titanium. We have multiple forests that we use to segregate our environment so that we can take more chances on the infrastructure without impacting the entire company. In the case of Exchange we’ve got three forests we’re using to roll out the product. The dog food environment is actually owned by the Exchange team, and that’s the Exchange team’s environment where they run their own messaging servers and they’ll put new builds on those servers on almost a daily basis as they’re building through the first beta of the product.

As you would expect, availability is not very good in that environment but there is a lot of learning and we in the IT organization interface with them in order to understand what issues they’re seeing and what we can learn before we start to move into our own deployment.

When they’re ready to release a beta, we require two weeks of availability within the dog food environment, so we ask for two weeks of three-nines (99.9 percent) availability on that environment before we’ll think about moving it into the next phase. As they’re ready to hand those bits off, we take them into IT and put them in a forest called Win Deploy. Win Deploy is a forest we’ve created with about 6,000 users in it from the Windows development team, and we use it as the place where we typically put new Windows or .NET Server bits on the server, so we can understand what happens to the infrastructure when we change the server component. So we’ve got a complete infrastructure with domain controllers, Exchange servers and about 6,000 people.

We’ll take the Titanium bits, and right now we’re in our first week of deployment within the Win Deploy forest.

When we get one week of solid availability within that forest we’ll move to corp so we’re a number of days from rolling out Titanium into the corporate forest and to the rest of our user base.

With Beta 2, our goal is to have 15,000 mailboxes in the corporate forest, people actually using Titanium, before we ship Beta 2 to our customers.

At the same time we’ve also got people on the Windows, Exchange and Office teams using Titanium with Outlook 11, the Outlook that comes with Office 11, so that we can really test the great features that come from using Titanium and Outlook together.

By the time we ship the product at RTM, we have agreed that 100 percent of our mailboxes will be running on the Titanium platform, with availability goals that we’ll agree to over the next few weeks before the product ships to our customer base. And so this is how we go methodically from taking beta bits and putting them into the environment, allowing us to truly test the environment in Redmond first, then out to the rest of the world, and then to our customers.

Now, you would expect us to do that, and as an IT organization we’re very pleased with the role we play in making sure the products are ready. As an IT organization and as a CIO, there are also other reasons I’m excited about this product. The first is the way we can leverage Windows .NET Server 2003. Windows .NET Server 2003 has better clustering; the clustering support that comes with it has really continued to increase, and Titanium is taking advantage of all those features.

With Volume Shadow Copy Service, VSS, we’ll also have a much better ability to recover data if we lose a server.

And so these two things together, leveraging Windows, allow us to look forward to a plan with much bigger servers. Today we’re running servers of about 3,000 mailboxes on the Redmond campus. Our rollout plan for Titanium is to go up to 5,000 users per server, because with clustering we can offer much greater availability with much less risk of down time for end users, and with VSS, if we do lose a server we can get it back much quicker, no longer taking days to restore a very large server.

Naturally we think about that as server consolidation, but we did most of the server consolidation on the Redmond campus with the benefits we got out of Exchange 2000. Where we’re really looking toward server consolidation is in our field regions, in those 70 sites around the world where we have an Exchange server, mostly because of the historical issues with latency in the Exchange 5.5 environment. We really didn’t change that environment substantially when we rolled out Exchange 2000.

With the features in Office 11, primarily the cached mode feature, and the improvements to the Exchange environment in Titanium, we believe we’re going to have the best experience ever, both offline and online, with Exchange as we roll out these two products together. That’s going to allow us to reduce the number of servers we have out in the field, bring them into the RDCs, and then eliminate some of those regional data centers to reduce our overall total cost of ownership.

Most importantly, we’re going to do that by giving our users better service than they get today. Because of the clustering and the storage, we’re going to have bigger servers that will be more available than the individual small servers out in the field today. And because of the features and experience that come with Office 11, they’re going to have a better online and offline experience.

So we win both on client satisfaction in how people interact with mail and, from an IT perspective, on a substantial reduction in TCO and complexity: a smaller number of moving parts, located more centrally where we have people to support them.

Today we’re running MOM for Exchange, Beta 2 of SP1, on Exchange 2000 and on Titanium. This is an important step forward for us, because historically we’ve monitored Exchange with a homegrown tool called Prospector. What that meant is that the global operations center that does all monitoring of servers and networks worldwide was separate from the team monitoring our Exchange environment, because we had different tools: MOM on the server side but Prospector on the application side for Exchange.

With the rollout of MOM to our Exchange team, we’re going to be able to move the triage function into the global operations team, letting our Exchange operations team do higher-value-add Exchange work. The opportunity to reduce the number of people we have looking at the Exchange environment, make it part of our standard triage and monitoring system, and allow our Exchange administrators to add much greater value is one of the things we’re looking forward to as we roll out Titanium across the company.

Of course, with Titanium, mobility is built in. We believe mobility should be a commodity: everybody should be able to roll it out, and within an enterprise anybody who should get mobility should be able to have it. We at Microsoft believe most of our employees should have a mobile experience, and with Titanium we’ll give them the best mobility yet. Whether it’s the Outlook Mobile Access feature, OMA, the replacement for MIS as we think about it today, which is integrated into Titanium; whether it’s OWA, which has improved in features and reliability and, most importantly for me, comes with spell check, so when I use it away from the office I can actually send mail that people can read; or whether it’s the ability to connect over the Internet to your mailbox, these mobility features are again the next step. And because they’re so integrated with Titanium, we believe rolling them out is going to be much easier than it was for us internally to bring mobility to our user base over the course of the last few years.

And it will be the most secure version of Exchange we’ve ever had: more secure because of the work we’re doing; because it’s the first Exchange product to come out as part of the Trustworthy Computing initiative, with all the lessons we learned going through Windows, SQL and Visual Studio to make sure our products are more secure; and because OWA will now support S/MIME and provide a more secure environment for those people who truly want encrypted mail at the desktop.

So we’re excited about the product from a company perspective. We’re also very excited about it from an IT perspective, and about the fact that, together with Windows .NET Server 2003, we’re going to have the best experience and the most reliable Exchange environment we’ve ever had.

Now, the same story is true on Windows. We’re well into deployment of Windows .NET Server 2003. We’ve got over 1,400 servers running it today. That includes all of our domain controllers, two of them 64-bit, and we’re very excited about getting 64-bit onto the domain controllers.

We’ve moved it into our RAS and wireless environment. We have implemented 802.1x on our wireless access points to give us better security on our wireless LAN. We rolled out wireless over two years ago, and it has been a huge productivity feature for us. When we realized we weren’t as secure as we needed to be, we had a decision to make: either roll out 802.1x very quickly or take away a feature our client base has truly told us is one of the best implementations we’ve ever given them from a pure productivity standpoint. And so we rolled out 802.1x and have since moved it over to .NET Server 2003.

We’re moving our line-of-business apps over. Today 13 of our major line-of-business apps have moved, and we’ll move another seven over the course of the next month. And when I say major apps, I mean the ones we truly use internally to run the company. We have a single instance of SAP sitting in Redmond, a single instance worldwide that we run our business on. That has already moved to RC1. Microsoft.com has moved to RC1 of 2003. So has MS Sales, the single data repository we use for explaining to the sales force what’s happening from a sales perspective: the ability for sales and finance people to look into a data warehouse and truly get sales revenue by account, by region, by sales rep.

These are tools we use every day to make decisions to run the company. They’re all running on Windows .NET Server 2003.

And about six weeks ago we flipped the bit to go to Forest Functional Mode, or what we used to call Whistler Forest Mode. We’re excited about that because it gives us additional functionality we didn’t get in Windows 2000. The two key features we were looking for that we get by going to Forest Functional Mode are, one, the Knowledge Consistency Checker: the ability Windows .NET Server 2003 has, when you put a new domain controller in a site, to look through the topology of the environment and automatically decide where it’s going to replicate, making the most effective use of that topology. That feature was in Windows 2000, but it had a cap of 200 sites; after 200 sites you had to turn it off, and it became a very time-intensive administrative task to make sure the domain controllers were properly aligned in the topology. That cap has been removed by going to Forest Functional Mode.

The other feature is trust between forests. As I said, we have multiple forests, and in Windows 2000, to create inter-forest trust you had to make a trust connection not just between the two forests but from every sub domain or child domain in one forest to the child domains in the other; with five child domains on each side, that’s 25 separate trusts to create and maintain, a very time-intensive use of our people’s time to keep a secure environment in the multiple-forest environment we run at Microsoft.

By moving to Windows .NET Server 2003 and Forest Functional Mode, we get that automatically, out of the box. We make one Kerberos-secured trust between the two forests and the child domains fall into place: a huge productivity gain for administrators within the IT organization.

Now, there are other wins we get by moving to Windows .NET Server 2003. I don’t know if that rolls right off your tongue; it certainly does mine. Most important for us is replication from media: the ability to bring back a server from media rather than having a DC replicate over the wire. So if you lose a server, or if you want to bring up a new server, instead of taking potentially days at the end of a slow link, we can replicate it from media, whether that’s on the hard disk, on tape or on CD, depending on the size of your .dit files.

This significantly enhances our disaster recovery strategy for sites around the world. It allows us to go to a single DC where we had two before, out of fear that if we lost a server, or ran into a bug as we rolled out beta software, it would take days to get that site back up, depending on how far out it was in the topology and how slow a link it sat on.

So the ability to capture the file and replicate from media is a huge win. These wins don’t require you to move to Forest Functional Mode; they’re big wins that come out of the box with .NET Server 2003.

The second big win for us was single instance storage. Windows .NET Server allows attributes that are common to be stored one time instead of with each object, and we’ve seen about a 60 percent reduction in the .dit file since we moved to Windows .NET Server 2003. A smaller .dit file means the database can now be stored on the hard disk, so if we do need to replicate from media, today we’re doing it from the hard disk as often as possible. Again, less time: we’ve got a backup right there, so the replication that does have to take place to get that DC synched with the rest of the environment is just looking at the changes since the last backup. This is again a great win for us as an IT organization in terms of how we think about disaster recovery and the availability of the domain controllers.
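(For illustration, single instance storage is conceptually like interning: keep one copy of a value that many objects share, and have each object hold a reference to it. The sketch below is an analogy only, not the actual directory storage code.)

    using System;
    using System.Collections;

    class CommonAttributeStore
    {
        private Hashtable pool = new Hashtable();

        // Return the single shared copy of a value; objects reference it
        // instead of each carrying its own duplicate, which is why the
        // database file shrinks.
        public string Intern(string value)
        {
            if (!pool.ContainsKey(value))
                pool.Add(value, value);
            return (string)pool[value];
        }
    }

    class Demo
    {
        static void Main()
        {
            CommonAttributeStore store = new CommonAttributeStore();
            string part = "common";
            string a = store.Intern(part + " attribute value");
            string b = store.Intern(part + " attribute value");
            // True: one stored copy, two references.
            Console.WriteLine(Object.ReferenceEquals(a, b));
        }
    }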

And finally, DC rename. Under Windows 2000, if we needed to rename a DC we had to demote it and then promote it again. Demote was easy, but bringing it back up and letting replication happen to that DC made it such a long process that we tended not to rename DCs unless we had to.

Now, think about this administratively: how do we want our ops people to think, and how do we want to make them productive? We want our DCs to be numbered: one, two, three, four. When DC5 is down, we don’t want anybody thinking, wow, there are five DCs in that site when, in fact, that’s the only DC there; we’ve just changed DCs as we’ve rolled out various bits of software or taken servers offline. So by nature we like to have an order for what we do.

We also have different servers in different locations. As we rolled out Windows 2000 we consolidated servers, put more on each server, and created what we call the consolidated server platform, which had the domain controllers. It also had bits that we send out every night, which we call DDS. It might have file and print; it might have user files. And so we had a view of what would be on a server if it was a sys server versus a UPS server versus a biz server.

As we went through some of our security work we realized we needed to get the domain controllers off those servers, so we created a new version of the server, a user-based server, and yet those domain controllers still carried the old names. By using DC rename, we were able to rename about 90 percent of our DCs, so that when somebody in the ops organization is looking at a server that pops up in MOM with an issue, because we’ve taken the time to rename them, because renaming wasn’t painful, they know exactly what that server is, they know what’s on it, and they know how to think about what they need to do.

This is, again, great productivity in the heat of battle. When we have an issue, when something comes across MOM and into the triage group, we don’t want people to have to take time to figure out what that server is or what’s on it. The ability to use our naming convention, and to keep names current whenever we make changes, has been a huge gain that we got out of the box with Windows .NET Server 2003.

So we’re excited about the product. We’re excited about the availability and stability of the product. I’m sure over the course of this week you’re going to hear a lot about Titanium and a lot about Windows .NET Server 2003 at a deep, rich technical level.

From an overall IT perspective, we’re excited about the features this application and new OS bring to us as an organization, and about our ability to get a better, more available, lower-cost solution out to our user base.

Now let me change gears a little bit and move from the infrastructure to bringing value to the end user through applications and how we think about going forward. It’s our job to integrate technology with the business and to bring business solutions to our business partners through Microsoft’s organization.

Now, I want to do a couple of demos here with you to give you an example of how we think about that. In the first case I want to bring out Jeff Sandquist, who’s a program manager with the .NET platform strategy group. So I’m going to ask Jeff to come out here, and I want to talk about a Web service we built almost a year ago to help us expose data we had within the environment. Jeff, how are you?

JEFF SANDQUIST: Doing good. Thanks, Rick.

RICK DEVENUTI: Good to see you.

JEFF SANDQUIST: You know, a year ago my team was faced with a problem we had to solve. We had people working in the product groups who weren’t engaged with Siebel. We needed to get them more tied into our account management team, which uses Siebel as our CRM application. They weren’t daily users of Siebel, so we couldn’t really force them to install the client. They just needed to find out information about accounts, and we had to make it really easy for them.

So what we did was, with one developer, write an integration layer using XML Web services to take our account teams and the people in our product groups and tie them into our Siebel installation.
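(For illustration, a minimal sketch of what such an integration layer could look like as an ASP.NET XML Web service of that era; the class, method and field names are hypothetical stand-ins, not the actual Factory code.)

    using System.Web.Services;

    // One account record as returned to lightweight clients.
    public class Account
    {
        public string AccountId;
        public string Name;
        public string Industry;       // the vertical, e.g. financial services
        public string SalesDistrict;
    }

    [WebService(Namespace = "http://example.com/factory/")]
    public class AccountService : WebService
    {
        // Wraps the Siebel query behind a simple XML Web service so that
        // occasional users never need the Siebel client installed.
        [WebMethod]
        public Account[] FindAccounts(string nameContains)
        {
            // The real system would query the Siebel installation here;
            // an empty result keeps the sketch self-contained.
            return new Account[0];
        }
    }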

You know what happened? We realized this was completely infectious, and we got such a big return on value that we started exposing other systems: MS Sales, as Rick mentioned previously, the Clarify support database, our worldwide events system, Customer Broker and our worldwide marketing database.

This enabled us to have a 360-degree view, add a lot of value for people on our evangelism teams and in our product groups, and give easy access to this information across multiple systems.

Also, did I tell you we did this in 100 days? So we had a couple of very smart developers and we were able to move quickly.

Do you want to see what this looks like?

RICK DEVENUTI: Absolutely.

JEFF SANDQUIST: The very first application that this was exposed in was the .NET Factory. So let me just bring this up here and I’m going to give a little demo of it.

Right here is the .NET Factory. It looks like a relatively unassuming portal application where we have headlines highlighting programs and initiatives for our sales force. But I’m going to do a little search, and what that just did is go out to our Siebel installation, calling an XML Web service; it ran a query in Siebel to bring me back all the customers with VRP Associates in the name. It’s pretty cool.

I’m going to click on this, though, and what that just did was load the high-level account information from Siebel and give us real easy access to it.

You know, we have basic account information up top, and down below we highlight and give people access to the contacts, the Siebel opportunities for this account and the activity. Now, that’s just Siebel data. The Factory gives Microsoft employees a 360-degree view of other information. So alongside it we can see events from our worldwide events database, service requests under way for this account from Clarify, licensing information and also content.

What do you think of that, Rick?

RICK DEVENUTI: Well, I think that’s great and I use it a lot, but I’m the executive sponsor for several accounts and I don’t have time every day to go into the Factory to see what’s going on. Can you send the information to me, just the pertinent stuff?

JEFF SANDQUIST: Absolutely. One of the things we did in the Factory was use SQL Notification Services and .NET Alerts so you can have updates sent to you: to your cell phone, your mobile device, your Messenger client, as well as e-mail. Whenever any object changes, whether it’s in the Factory or outside it in any of these systems, you can be notified that something got updated.

A really neat thing happened at Microsoft when we got into this project. We started working with a lot of different teams. We had a real grassroots effort of getting different systems exposed as Web services, and it was really quick to do. Neat things happened when we started pulling different systems together. If you look at this, this is a Siebel account. It’s tagged up here by vertical industry: financial services. Microsoft has a number of content stores, probably like you have in your companies. We have a content store for the content management system our field sales force uses. We have another one that the marketing department uses for customer evidence. And we actually have our own little content store in the Factory.

Well, up here, if I look at this customer in financial services, there’s a little link down here that says show all content available for this customer’s vertical industry. Somebody in the field wants information to maybe help with a win on this account; they click on this link, and down here what they get is all of the evidence from those three different systems brought back: case studies and other information they can take with them for that win.

Is that pretty cool?

Another thing we had was people in different roles in the company. We have people who are developer field evangelists. We have architect evangelists. We have technical solutions people. A lot of times when people are working with an account they want to find a person with a particular role. Well, you might look at the Siebel account team and see if that person is in there. At Microsoft, not everybody is on the account team; there might be six people, there might be 20.

Well, what we did was build a query that looks at Siebel and asks, where is this customer located, what’s their sales district? And at Microsoft we use distribution lists a lot to show community, so people who are developer field evangelists belong to a certain alias called DFE. We have another distribution list for architect evangelists, and so forth.

So what we do here is we say show the contacts for this customer’s sales district, select the link and it goes out and it interrogates the Active Directory, the various distribution lists to find the people with different roles so you can see who the developer field evangelists are, where are they located. It’s bringing different systems together using different technologies but absolutely at its core is XML Web services.
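(For illustration, a hedged sketch of that directory lookup in .NET; the LDAP path, the DFE alias and the attribute names are assumptions, not the actual Factory implementation.)

    using System;
    using System.DirectoryServices;

    class DfeLookup
    {
        static void Main()
        {
            // Bind to the domain and find the distribution list for the role.
            DirectoryEntry root = new DirectoryEntry("LDAP://DC=corp,DC=example,DC=com");
            DirectorySearcher searcher = new DirectorySearcher(root);
            searcher.Filter = "(&(objectCategory=group)(cn=DFE))";
            searcher.PropertiesToLoad.Add("member");

            SearchResult result = searcher.FindOne();
            if (result != null)
            {
                // Each value is the distinguished name of one list member,
                // i.e., one developer field evangelist.
                foreach (object member in result.Properties["member"])
                    Console.WriteLine(member);
            }
        }
    }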

This application was built over a very short period of time, but we had a huge return on investment when we did it. And we exposed 100 percent of our business logic as an XML Web service for this application; everything we did was done in the Web service.

Well, what happened is it all started with the Factory and we realized that we could take all of that back-end infrastructure and expose it to other applications. So we wrote a very basic application for reading and writing information back to Siebel.

Shortly after that we did a broad application for our sales force so that they can see pipeline information, making that really easy for them.

We brought online a complaint management system, all going into that same back end, so there’s one managed ITG infrastructure but different people working against that XML Web service API.

It doesn’t have to be just Web clients. We can do Excel-based clients using the Office XP Web Services Toolkit to give people ways to read and write that information.

And finally, because mobility is really important, we have a Pocket PC version in pilot right now as well. We just make it really easy for people to get access to the data, make it very simple to maintain, and we can have a constellation of applications based on people’s roles.

RICK DEVENUTI: Thanks, Jeff. That was great. (Applause.)

JEFF SANDQUIST: Thank you.

RICK DEVENUTI: Now, I told you at the beginning that we think of ourselves as a standard IT organization, and this demo and the tools we’ve pulled together in an environment we called Alchemy are an example of that. We had disparate islands of information: Clarify, which is where we track our support incidents; MS Sales, which I already described as that single database of sales information; and Siebel, where we have all of our account and contact information. And as a sales rep, if you really wanted to do a great job understanding your account, you needed to go to all three of those sources and pull them together: what’s new in the account, who are the contacts, is there anything going on in the global account that I need to know about as a global account manager, what’s happening on the support side, where are we on revenue versus goals.

With this investment in Web services, we’ve been able to pull that all together not only for the sales reps but for anybody else who’s interested in getting that information, myself as an executive, the product group where we try to get our product group people better aligned with customers to understand issues.

The point Jeff made is very true. This started as a single instance with a single person to show the power of Web services. Once the sales and support IT organization saw this, and realized the ability to take all of those other disparate pieces of information and bring it together, it became infectious. People are really using this technology today, starting with a single person, to drive value out of the legacy investment we’ve made.

Well, let me give you another demo. I’ll ask Daniel Kogan to come out. Daniel is program manager with the Content Web services business. Daniel, how are you doing today?

DANIEL KOGAN: Fine, thanks.

RICK DEVENUTI: It’s good to see you. Now, CMS is an exciting product, but I’ve got some problems I’m hoping you can help me with. ITG Web is the main intranet portal we use within Microsoft, and certainly within IT, to explain what’s going on with our environment. Today we get over 40,000 hits a day from people at Microsoft coming to ITG Web to do downloads, to get information on services we provide, IT people coming to look at their metrics or to post information, so it’s an important site for us.

Some of the feedback I’m getting, though, is that the way we post there, through technical authors, maybe isn’t as user friendly as it could be. I’ve got a great communications group, but I’ll tell you, Jean and her team are not that familiar with FrontPage, and so we typically have a process we go through in posting this.

How can you help me with CMS and ITG Web?

DANIEL KOGAN: Right. Well, I mean, the problem you pose is definitely not unique to your organization. We find a lot of customers have this problem trying to get their content out to their Web sites quickly.

So the great thing is that ITG Web, as you were mentioning earlier, is one of our dog food customers in the CMS group. We’ve enabled the ITG Web site with Content Management Server to let them go out and manage their own content, distribute it, and so on. So I’d like to show you a little bit of what we’re doing on ITG Web today with Content Management Server.

You’ll see here on the Web site that they were really an early adopter, to the point that they’re telling the world when they’re happy and when they’re not happy with our products. The fact is they’ve been happy since day one, so we’ve never had to change our little smiley face here.

But the interesting part about what Content Management Server is doing for ITG Web and the IT communications group is that they now have the ability to go into the Web site and, if they’re an author (in this case I’ve already logged in as someone who has authoring privileges), just switch to edit mode and edit a particular page or piece of content live on the Web site. So if someday, for some reason, they wanted to change this to a frowning server, which they won’t, they could.

But this still requires people to go in and edit a Web page or a Web site, and what a lot of our customers have said, internally and externally, is: we want the ability to simply publish content from Word directly to my Web site.

So when the communications group has a new piece of data, a new piece of internal communications that needs to be made available to our community, they’re going to be able to go in and directly from an Office application — and let me just bring up Office, there it is, my Word is down there.

So we’ve written a quick, little article about our new RAS policy, what people need to do to be able to RAS into Corp Net and I want to get this published onto our ITG Web site.

Well, CMS 2002 enables the knowledge worker and really increases productivity by letting them publish directly from Office to CMS through a CMS wizard that ships with 2002, which we launched here yesterday at MEC. So we’re going to go in and run through the wizard; it engages the CMS workflow and sets the lifecycle and the scheduling of when things go out. Once this is done, we’ll have the content available on the ITG Web site.

So we’re done going through the wizard. If I go back to my Web page, I can refresh, and you’ll see that the content is not yet there because it has not been approved. The content is now waiting for someone who is an editor or an approver, or the legal department or my manager, to go through the workflow, either because they got pinged by an e-mail or because they come here every day as a matter of procedure, and see what’s waiting for approval.

The content is now available for review by the editor. In this case we’re not going to go into preview mode, just to save some time on stage, but I could go in and preview, make modifications and so on. I’ll approve this piece of content, close my dialog box, switch back to the live site, and you’ll see that when I refresh, there it is.

Now, that’s pretty powerful. That enables non-technical people to get their content out to the Web.

But more than that, in this example what we’ve said is: not only do I want to get my content on my Web site, I want to make that same content available to other parts of the organization, other groups, other Web sites that want to consume it. The way we’re doing that is by exposing our CMS-managed content through Web services, so we can get syndicated or federated content across the enterprise, or perhaps out to partners or an extranet. In our example here we have the MS Web homepage, which is our corporate intranet. You’ll see they don’t have any of our ITG news on it right now, but they could.

And the way they could do this: in this example MS Web is also an internal CMS customer, but it doesn’t need to be. Our target customer, a consumer of this content, could be another CMS site or any back-end system that knows how to consume a Web service, so this is really about interoperability of back-end systems with the content front end. We’re going to open up Visual Studio, which is the new integrated dev tool we use on the CMS platform, and show you how easy it will be for the IT people or the Webmasters of the MS Web group to go in and decide to consume the Web service through which ITG is exposing its content.

And all I’m going to do, to save everybody the embarrassment of me typing on stage I’ve sort of got my code already here, and I’ll just drag and drop a very quick text label there and you’ll see it pop up in a second and then we’ll just go to the code behind, and by putting two lines of code in we’re going to be able to get the Web service.
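(For illustration, the code-behind for that label might look something like the sketch below; the Web reference namespace, proxy class and method name are assumptions, since Visual Studio generates the proxy when you add a Web reference to the ITG service.)

    // Two lines of working code: construct the generated proxy, then drop
    // the syndicated ITG content into the label we dragged onto the page.
    private void Page_Load(object sender, System.EventArgs e)
    {
        ItgContent.ContentService svc = new ItgContent.ContentService();
        NewsLabel.Text = svc.GetLatestNews();
    }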

So this is a pretty simple but powerful example of leveraging Web services through content management in the enterprise for syndicating content. All we’ve done here is call the Web service that ITG makes available. We’re going to save and then rebuild our solution, so it will take just a second to refresh my Web site after we finish rebuilding, because we have to compile the application.

We’re going to go back to the MS Web site once this is done, making sure everything worked nicely in the build. There we go. Now, this will take a quick second or two to refresh because we’ve rebuilt the application, but when it comes back you’ll see that the MS Web site is now going to have the content that we’ve added to the ITG Web site from Word, and they’re going to do this by consuming the Web service.

So there’s the content from the Web service that is now being made available. And if we were to go back to Word and change the content on the ITG Web site all their partner sites or anybody else who’s consuming this content, whether it’s a CMS Web site or any other target back-end would have the latest and greatest data.

So this is a really good example of taking non-technical people and really empowering them to be able to distribute and federate content across the enterprise.

RICK DEVENUTI: That’s really great. Thanks very much.

DANIEL KOGAN: Great. Thank you.

RICK DEVENUTI: Thanks for coming out. (Applause.)

So there’s a great case where we can not only increase the productivity of the people managing ITG Web but also, in the example shown, our change in RAS policy, get the word out to the rest of the company, to the people who don’t come to ITG Web, by feeding that Web service directly to other sites within the corporation.

The last topic I wanted to touch on today is security, because it’s hard to go anywhere without talking about security and the fact that we need to continually work as an industry and as a company and as an IT organization to make sure our people, processes and technology are really focused on giving us the most secure environment possible.

We’ve done a lot within IT at Microsoft to make our own environment more secure over the last two years and I wanted to go through just a couple of things we’re doing.

Rolling out smart cards for two-factor authentication is a project we’ve been working on for a while. Today, over 27,000 people at Microsoft need to use smart cards to connect remotely. We’re rolling that out worldwide, and by the end of this year the only way to connect remotely to Microsoft will be with an issued smart card, again using two-factor authentication to make that network perimeter more secure.

The example used in the demo you just saw, the new RAS policy, was about our new secure remote user quarantine service. What we’re rolling out right now uses Connection Manager as the only connector people can use to connect to the Microsoft corporate network. With Connection Manager we can run policies that look at the system that’s connecting and make sure it’s trustworthy, and by trustworthy I mean it’s running the right version of Windows; if it’s running XP, that the Internet firewall is turned on; and that the current version of the anti-virus software we use as standard is installed. If any of those checks fails, Connection Manager policy doesn’t pass the key to the authentication server, and you get moved to a Web site where you can download the bits you need, either to upgrade your OS or to get the current version of the AV software. But you can’t connect to Corp Net; we keep you in quarantine until the key is passed up from Connection Manager. Again, we’re making sure that only the right people are connecting, via two-factor authentication, and that the device they’re using is trustworthy, meaning it’s got the right OS, it’s got the right patches, it’s running a firewall, it doesn’t have a blank user password and it’s running the current version of our AV software.
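(For illustration only, a sketch of the quarantine decision described above. The real implementation is Connection Manager policy and custom actions, not application code; every check below is a hypothetical stand-in.)

    using System;

    class QuarantineCheck
    {
        // Stubs standing in for the real Connection Manager probes.
        static bool ApprovedWindowsVersion() { return true; }  // right OS, right patches
        static bool FirewallOnIfXp()         { return true; }  // Internet firewall on under XP
        static bool AntivirusCurrent()       { return true; }  // standard AV signatures up to date
        static bool NoBlankPassword()        { return true; }  // no blank user password

        static void Main()
        {
            bool trustworthy = ApprovedWindowsVersion() && FirewallOnIfXp()
                            && AntivirusCurrent() && NoBlankPassword();

            if (trustworthy)
                Console.WriteLine("Pass the key to the authentication server; connect to Corp Net.");
            else
                Console.WriteLine("Stay quarantined; redirect to the remediation Web site.");
        }
    }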

Now, I already mentioned wireless and the use of 802.1x. So we’ve looked at the perimeter and asked whether there are things we can do to tighten security there, and those things included the factors you see here.

We also said that just making the perimeter safe is not going to be sufficient; we don’t believe it’s a one-step trip. So we want to look inside the environment and make sure we’re doing things to make that secure too, and we’ve done a lot of that over the last few years. We require strong passwords, and we don’t allow people to repeat a password for 32 uses. We’re very focused on making sure people are using the right type of passwords and, in fact, we’ve looked across the network to make sure there are no blank system admin passwords, either in Windows or in SQL.

We’ve reduced shared accounts among servers. We in the IT organization were the worst offenders there, constantly sharing accounts across servers in order to be productive. Because being an admin on one server then makes you an admin on another, we had to change many of our processes to keep our environment more secure.

We’ve implemented application security across the environment. We looked at every application that is either Internet-facing or has customer or partner information in it and did a cursory review of its security. For anything that didn’t pass that cursory review 100 percent, we did an in-depth review and took apps offline until the issues in those applications were fixed. And not surprisingly, we learned a lot going through that process.

We then took the training the Windows team and the development organization had been doing over the course of the last few months with Windows, and we’re putting our internal developers, our IT developers and system admins, through that same type of training to make sure they really understand what we define today as a secure application and a secure Internet application, and to make sure they know our views on privacy and security throughout every application we build.

We’re now in the process of enforcing application of security patches to our machines. Now, those of you who know our products well know that patch management has not been a strong suit of our products historically. We’ve always used products like SMS to update software or to update service packs but not patches.

And so we’ve worked hard with the product groups to sit down and say, here’s what we believe we need as an industry and as a customer in order to manage our environment.

Now, we don’t manage desktops in Microsoft. I talked to you about the large number of desktops and the fact that we’re constantly in beta mode, and so we want people to have a lot of freedom to download new bits and try new things. And we’ll take the support cost that goes with it because of the learning our people get as they try new bits on their desktop.

On the other hand, we need to make sure we have a secure environment. And so while we don’t mandate what their desktops look like, we have to mandate when a patch goes out that it gets deployed.

And so as we sat down with the product group and talked about our specific needs they realized that there are different needs for different customer types and today we have a three-pronged strategy for patch management.

If you’re a home user or you have a small business, Windows Update is available. Windows Update is user initiated: you go out to a Web site, in this case Microsoft.com, you can see all the available patches, decide whether or not to patch your machine, and pull them down. It’s a great feature, and it’s very easy to use. But it means going out to a Web site run by Microsoft. As an end user you may not mind that, but as an enterprise or a medium business you might want more control over what can be put on your machines: which patches you actually want implemented, and where those patches come from.

And so for medium businesses we’ve introduced SUS, or Software Update Services, with the same model of being user initiated, going out to a Web site and pulling updates down. But in this case the IT administrator can decide, by going out to the Microsoft site, which patches to bring in behind the firewall and put on their own server for end users to go get.

So the difference between the two is where it’s located — behind the firewall versus on Microsoft.com — and the ability for the administrator to decide which patches are going to be there instead of Microsoft putting all its patches up on Microsoft.com.

And that’s a big improvement; I think it works for many companies. But that doesn’t mean it meets our needs as an enterprise IT customer of Microsoft. For that, SMS with the SMS Software Update Services Feature Pack is really the solution. Again, it has user- or administrator-initiated deployment, so deciding what goes on the server is an administrative task: the administrator approves which updates and software are going to be used, and the deployment is done behind the firewall, just like SUS.

But it brings features that aren’t there in SUS. With the Feature Pack you get reporting: you get to know whether the upgrade worked, whether the patch was applied, and if not, on which machines, so you can go out and figure out what the issues are. It allows for scheduling, so either the IT organization or the end user can decide when to install, as long as it’s within the window. And if the individual doesn’t do it, the IT organization can mandate it or do it for them.

So we get reporting and the ability to mandate a patch: if we have a big issue that needs to go out right away, we as an IT organization can decide that it has to happen, it has to happen now, and you don’t get a vote. That’s a capability we don’t want to use very often, but one we absolutely need to have in order to protect our environment.

And so we’re excited about the Feature Pack, which is coming out very soon. To give you a view of it, Paul Barcoe-Walsh is here from the product group to show how we use SMS in the environment. Paul, how are you?

PAUL BARCOE-WALSH: Just fine. How are you doing?

RICK DEVENUTI: Good, good.

PAUL BARCOE-WALSH: It’s good to see you here again.

As Rick said, we have a number of options for managing these critical updates, security-critical updates: Windows Update, based on that technology, for the home user, and Software Update Services, again mainly aimed at the small and medium business. Why more than one? Because of scaling, and also reporting, as well as the scheduling Rick mentioned.

What I want to do is look at a couple of demonstrations here: one for the medium organization, and that’s SUS, and then the SMS Software Update Services Feature Pack for the enterprise.

So let’s look quickly at Software Update Services for the medium organization. As you can see here, I’m being notified that there are a number of critical security patches that need to be deployed in my organization. As the administrator, I want to be satisfied with them before my end users and my machines actually receive them. So I’m looking at my SUS Web site; really it’s a Web console connected to a server behind my firewall, where I can pull down mission-critical updates, approve them, maybe do some investigation of them, and then push them out. My clients are configured to connect to my Windows Update server, where they will receive those updates and can deploy them as needed.

So in this one here you can see I have a number of components I can go into, look around, and see exactly what each of them does and what it means to my desktops or maybe my servers. I can click on the details, and it tells me that this particular one is for the XP platform and the locale is English. I may want to go in and take a look at the information around that and see exactly what this update means for an end user or for the machine I’m going to target it to.

Once I’m happy with that, I can just click Approve and have it pushed out; my desktops go to my Windows Update server, pull it down and install it onto their systems.

One or two things to note: there are no reports built on that, so right now there’s no reporting. There is some server-side logging, but an administrator has to walk through each of those logs.

So for the enterprise, what have we got? For the enterprise we really have SMS. SMS today can collect information and do software distribution, but as Rick said earlier on, it wasn’t really aimed at critical patch management as such.

So later this year you will see, available for SMS 2.0, the Software Update Services Feature Pack. What it does is use technology such as MBSA to go and find exactly what is happening on my machines, what patches are actually needed on my desktops or my servers, and notify me; it puts this into my SMS database, where I can run reports and see exactly how my systems are doing.

I also get notified, for example by the Microsoft Security Notification Service, of any new updates that are available from Microsoft and that I should investigate to see which ones I should deploy. For that we have a Web site, again using SMS Web reporting. We can drill in and look at the standard reports in there, broken down into a number of categories such as software reports. I can take a look at Software Updates, and in there I can look at the applicable updates for specific machines, or count the applicable updates by type; the type I’m looking at today is security.

So I go in, put in security here, click Display, and it shows me there are a number of security patches that need to go out. Again I can drill in, as I did with SUS, and see exactly what these mean. You can see there are a number of XMLHTTP control patches that I need. If I want to, I can go out to the Microsoft Web site, read more information about each, and decide which of these I have to deploy.

Again, using SMS and the Feature Pack, I can go back in and take a look at the applicable software updates for a particular machine if I need to. So in here I can put in the machine details, do a quick display, and it tells me that on this specific machine I need to put these particular critical patch updates. Obviously I’d have to do that for more than just one machine, but for this machine, this is what it needs.

At this point I could go to SMS 2.0 and you’ll see with the Feature Pack you have that extra functionality of being able to go in, go to All Tasks and do Distribute Updates. Distributing updates is now going to allow me to schedule an update, to target the update at specific machines that I’m looking at hitting.

So again I’m going to tell it the category that I’m looking for specifically is security. I’m going to go in and build a new update and I’m going to give it a name here and I’m going to tell it where I want to bring it down from Microsoft.com, so I’m giving it a source location that it can bring these updates from. And then it will give me a list of the ones that are up there. I may pick one or two of those to bring down. It goes out right now to Microsoft.com, pulls down those individual items and then I can see at this point how I want to target those.

Now, there are a number of things here. Obviously I’m working closely with ITG, and ITG wants to ensure we treat desktops and servers differently. So, for example, if I’m sending out this patch, it may be okay for me to reboot a desktop now, but I may not be able to reboot a server now; I need to schedule it. I know Rick and his team were extremely aggressive in making sure we put this in here, because we cannot just reboot a server straightaway; we need to schedule down time. And this functionality allows me to specify that. If I want to restart a desktop as soon as the patch gets out there, I can do that, or if I wanted to, I could specify a schedule for a server as well.

Also note one other thing ITG really wanted in this: the ability to mandate an update. So with the functionality we’ve put in here, if the install is not initiated by a user within a set period, in this case 48 hours, though I could reduce that, it gets installed automatically. That also means that if a server has no one logged into it, I can send the patch to that server and have it installed automatically, without having to manually touch each of those servers.

Then I go out and create the actual package, and at this point I can choose which machines I want to send it to. Obviously, in your real environment you’re going to go through a test platform and make sure you’re happy with the criteria before you put it into production, but I’m going to go straight ahead, because we’re happy with this, and specify that I want to send it out to all of my systems. All of my systems will take it down from a particular machine, and I fire this package out to each of those systems. Then, based on them receiving it and installing it, I can have a report notifying me that it has been done, and done successfully.

RICK DEVENUTI: Paul, great. Thanks very much. Appreciate you coming out. (Applause.)

Clearly we’re excited about those features and the ability to manage our own environment better than we do today.

So what’s IT all about at Microsoft, and what’s IT about in general? I think it’s about being connected: connecting those islands of data that were disparate before, and connecting people with mobility and wireless. It’s about making our people more productive, looking at the applications, the information and the Web services available to make sure our people are more productive today than they were yesterday. It’s about the economics of bringing the best TCO to the infrastructure, reducing our costs and increasing our availability, so that we can move more dollars from infrastructure to applications and productivity. And finally, it’s about dependability, having the most reliable, available systems anywhere.

That’s what we’re doing today at Microsoft in the IT organization and that’s what we’re doing as a company to continue to bring business value to you, our customers, as well as us as an IT customer.

That’s what I came to talk about today. I want to thank you for your patience and I know it looks like we’re going long so thanks very much for your time today.

(Applause.)